00:00:00.001 Started by upstream project "autotest-per-patch" build number 132114 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.010 The recommended git tool is: git 00:00:00.010 using credential 00000000-0000-0000-0000-000000000002 00:00:00.012 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.029 Fetching changes from the remote Git repository 00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.050 Using shallow fetch with depth 1 00:00:00.050 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.050 > git --version # timeout=10 00:00:00.081 > git --version # 'git version 2.39.2' 00:00:00.081 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.134 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.134 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.679 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.694 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.708 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.708 > git config core.sparsecheckout # timeout=10 00:00:02.720 > git read-tree -mu HEAD # timeout=10 00:00:02.737 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.755 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.756 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.872 [Pipeline] Start of Pipeline 00:00:02.890 [Pipeline] library 00:00:02.892 Loading library shm_lib@master 00:00:07.794 Library shm_lib@master is cached. Copying from home. 00:00:07.850 [Pipeline] node 00:00:07.908 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:07.910 [Pipeline] { 00:00:07.919 [Pipeline] catchError 00:00:07.920 [Pipeline] { 00:00:07.929 [Pipeline] wrap 00:00:07.937 [Pipeline] { 00:00:07.947 [Pipeline] stage 00:00:07.949 [Pipeline] { (Prologue) 00:00:07.965 [Pipeline] echo 00:00:07.967 Node: VM-host-SM17 00:00:07.973 [Pipeline] cleanWs 00:00:07.984 [WS-CLEANUP] Deleting project workspace... 00:00:07.984 [WS-CLEANUP] Deferred wipeout is used... 00:00:07.990 [WS-CLEANUP] done 00:00:08.173 [Pipeline] setCustomBuildProperty 00:00:08.243 [Pipeline] httpRequest 00:00:08.848 [Pipeline] echo 00:00:08.850 Sorcerer 10.211.164.101 is alive 00:00:08.859 [Pipeline] retry 00:00:08.861 [Pipeline] { 00:00:08.876 [Pipeline] httpRequest 00:00:08.880 HttpMethod: GET 00:00:08.881 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.882 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.883 Response Code: HTTP/1.1 200 OK 00:00:08.884 Success: Status code 200 is in the accepted range: 200,404 00:00:08.885 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.028 [Pipeline] } 00:00:09.047 [Pipeline] // retry 00:00:09.055 [Pipeline] sh 00:00:09.335 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.350 [Pipeline] httpRequest 00:00:10.023 [Pipeline] echo 00:00:10.024 Sorcerer 10.211.164.101 is alive 00:00:10.034 [Pipeline] retry 00:00:10.037 [Pipeline] { 00:00:10.051 [Pipeline] httpRequest 00:00:10.056 HttpMethod: GET 00:00:10.057 URL: 
http://10.211.164.101/packages/spdk_88726e83bd5e0656d0ca5bcf945c82a0ab759303.tar.gz 00:00:10.057 Sending request to url: http://10.211.164.101/packages/spdk_88726e83bd5e0656d0ca5bcf945c82a0ab759303.tar.gz 00:00:10.058 Response Code: HTTP/1.1 200 OK 00:00:10.058 Success: Status code 200 is in the accepted range: 200,404 00:00:10.059 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_88726e83bd5e0656d0ca5bcf945c82a0ab759303.tar.gz 00:00:29.299 [Pipeline] } 00:00:29.319 [Pipeline] // retry 00:00:29.327 [Pipeline] sh 00:00:29.608 + tar --no-same-owner -xf spdk_88726e83bd5e0656d0ca5bcf945c82a0ab759303.tar.gz 00:00:32.904 [Pipeline] sh 00:00:33.185 + git -C spdk log --oneline -n5 00:00:33.185 88726e83b bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:00:33.185 f7bbde27e bdev: Add APIs get metadata config via desc depending on no_metadata option 00:00:33.185 adaafacab bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:00:33.185 31341da86 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:00:33.185 cfcfe6c3e bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:00:33.205 [Pipeline] writeFile 00:00:33.220 [Pipeline] sh 00:00:33.501 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:33.512 [Pipeline] sh 00:00:33.792 + cat autorun-spdk.conf 00:00:33.792 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.792 SPDK_RUN_ASAN=1 00:00:33.792 SPDK_RUN_UBSAN=1 00:00:33.792 SPDK_TEST_RAID=1 00:00:33.792 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:33.799 RUN_NIGHTLY=0 00:00:33.801 [Pipeline] } 00:00:33.815 [Pipeline] // stage 00:00:33.830 [Pipeline] stage 00:00:33.832 [Pipeline] { (Run VM) 00:00:33.845 [Pipeline] sh 00:00:34.125 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:34.125 + echo 'Start stage prepare_nvme.sh' 00:00:34.125 Start stage prepare_nvme.sh 00:00:34.125 + [[ -n 3 ]] 00:00:34.125 + disk_prefix=ex3 00:00:34.125 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest_2 ]] 00:00:34.125 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:00:34.125 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:00:34.125 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.125 ++ SPDK_RUN_ASAN=1 00:00:34.125 ++ SPDK_RUN_UBSAN=1 00:00:34.125 ++ SPDK_TEST_RAID=1 00:00:34.125 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:34.125 ++ RUN_NIGHTLY=0 00:00:34.125 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:00:34.125 + nvme_files=() 00:00:34.125 + declare -A nvme_files 00:00:34.125 + backend_dir=/var/lib/libvirt/images/backends 00:00:34.125 + nvme_files['nvme.img']=5G 00:00:34.125 + nvme_files['nvme-cmb.img']=5G 00:00:34.125 + nvme_files['nvme-multi0.img']=4G 00:00:34.125 + nvme_files['nvme-multi1.img']=4G 00:00:34.125 + nvme_files['nvme-multi2.img']=4G 00:00:34.125 + nvme_files['nvme-openstack.img']=8G 00:00:34.125 + nvme_files['nvme-zns.img']=5G 00:00:34.125 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:34.125 + (( SPDK_TEST_FTL == 1 )) 00:00:34.125 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:34.125 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:34.125 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:34.125 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:34.125 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:34.125 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:34.125 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:34.125 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:34.125 + for nvme in "${!nvme_files[@]}" 00:00:34.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:34.384 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:34.384 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:34.384 + echo 'End stage prepare_nvme.sh' 00:00:34.384 End stage prepare_nvme.sh 00:00:34.396 [Pipeline] sh 00:00:34.676 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:34.676 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:34.676 00:00:34.677 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:00:34.677 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:00:34.677 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:00:34.677 HELP=0 00:00:34.677 DRY_RUN=0 00:00:34.677 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:34.677 NVME_DISKS_TYPE=nvme,nvme, 00:00:34.677 NVME_AUTO_CREATE=0 00:00:34.677 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:34.677 NVME_CMB=,, 00:00:34.677 NVME_PMR=,, 00:00:34.677 NVME_ZNS=,, 00:00:34.677 NVME_MS=,, 00:00:34.677 NVME_FDP=,, 00:00:34.677 SPDK_VAGRANT_DISTRO=fedora39 00:00:34.677 SPDK_VAGRANT_VMCPU=10 00:00:34.677 SPDK_VAGRANT_VMRAM=12288 00:00:34.677 SPDK_VAGRANT_PROVIDER=libvirt 00:00:34.677 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:34.677 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:34.677 SPDK_OPENSTACK_NETWORK=0 00:00:34.677 VAGRANT_PACKAGE_BOX=0 00:00:34.677 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:34.677 FORCE_DISTRO=true 00:00:34.677 VAGRANT_BOX_VERSION= 00:00:34.677 EXTRA_VAGRANTFILES= 00:00:34.677 NIC_MODEL=e1000 00:00:34.677 00:00:34.677 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:00:34.677 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:00:37.964 Bringing machine 'default' up with 'libvirt' provider... 00:00:38.223 ==> default: Creating image (snapshot of base box volume). 00:00:38.482 ==> default: Creating domain with the following settings... 00:00:38.482 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730896286_04c4936d421b34883527 00:00:38.482 ==> default: -- Domain type: kvm 00:00:38.482 ==> default: -- Cpus: 10 00:00:38.482 ==> default: -- Feature: acpi 00:00:38.482 ==> default: -- Feature: apic 00:00:38.482 ==> default: -- Feature: pae 00:00:38.482 ==> default: -- Memory: 12288M 00:00:38.482 ==> default: -- Memory Backing: hugepages: 00:00:38.482 ==> default: -- Management MAC: 00:00:38.482 ==> default: -- Loader: 00:00:38.482 ==> default: -- Nvram: 00:00:38.482 ==> default: -- Base box: spdk/fedora39 00:00:38.482 ==> default: -- Storage pool: default 00:00:38.482 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730896286_04c4936d421b34883527.img (20G) 00:00:38.482 ==> default: -- Volume Cache: default 00:00:38.482 ==> default: -- Kernel: 00:00:38.482 ==> default: -- Initrd: 00:00:38.482 ==> default: -- Graphics Type: vnc 00:00:38.482 ==> default: -- Graphics Port: -1 00:00:38.482 ==> default: -- Graphics IP: 127.0.0.1 00:00:38.482 ==> default: -- Graphics Password: Not defined 00:00:38.482 ==> default: -- Video Type: cirrus 00:00:38.482 ==> default: -- Video VRAM: 9216 00:00:38.482 ==> default: -- Sound Type: 00:00:38.482 ==> default: -- Keymap: en-us 00:00:38.482 ==> default: -- TPM Path: 00:00:38.482 
==> default: -- INPUT: type=mouse, bus=ps2 00:00:38.482 ==> default: -- Command line args: 00:00:38.482 ==> default: -> value=-device, 00:00:38.482 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:38.482 ==> default: -> value=-drive, 00:00:38.482 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:38.482 ==> default: -> value=-device, 00:00:38.482 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:38.482 ==> default: -> value=-device, 00:00:38.482 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:38.482 ==> default: -> value=-drive, 00:00:38.482 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:38.482 ==> default: -> value=-device, 00:00:38.482 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:38.482 ==> default: -> value=-drive, 00:00:38.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:38.483 ==> default: -> value=-device, 00:00:38.483 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:38.483 ==> default: -> value=-drive, 00:00:38.483 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:38.483 ==> default: -> value=-device, 00:00:38.483 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:38.483 ==> default: Creating shared folders metadata... 00:00:38.483 ==> default: Starting domain. 00:00:40.387 ==> default: Waiting for domain to get an IP address... 00:00:55.312 ==> default: Waiting for SSH to become available... 
00:00:56.689 ==> default: Configuring and enabling network interfaces... 00:01:00.876 default: SSH address: 192.168.121.243:22 00:01:00.876 default: SSH username: vagrant 00:01:00.876 default: SSH auth method: private key 00:01:02.779 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.895 ==> default: Mounting SSHFS shared folder... 00:01:11.838 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.838 ==> default: Checking Mount.. 00:01:13.215 ==> default: Folder Successfully Mounted! 00:01:13.215 ==> default: Running provisioner: file... 00:01:14.149 default: ~/.gitconfig => .gitconfig 00:01:14.408 00:01:14.408 SUCCESS! 00:01:14.408 00:01:14.408 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:14.408 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:14.408 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:01:14.408 00:01:14.416 [Pipeline] } 00:01:14.431 [Pipeline] // stage 00:01:14.441 [Pipeline] dir 00:01:14.441 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:14.443 [Pipeline] { 00:01:14.456 [Pipeline] catchError 00:01:14.458 [Pipeline] { 00:01:14.472 [Pipeline] sh 00:01:14.752 + vagrant ssh-config --host vagrant 00:01:14.752 + sed -ne /^Host/,$p 00:01:14.752 + tee ssh_conf 00:01:18.940 Host vagrant 00:01:18.940 HostName 192.168.121.243 00:01:18.940 User vagrant 00:01:18.940 Port 22 00:01:18.940 UserKnownHostsFile /dev/null 00:01:18.940 StrictHostKeyChecking no 00:01:18.941 PasswordAuthentication no 00:01:18.941 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:18.941 IdentitiesOnly yes 00:01:18.941 LogLevel FATAL 00:01:18.941 ForwardAgent yes 00:01:18.941 ForwardX11 yes 00:01:18.941 00:01:18.950 [Pipeline] withEnv 00:01:18.952 [Pipeline] { 00:01:18.960 [Pipeline] sh 00:01:19.235 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:19.235 source /etc/os-release 00:01:19.235 [[ -e /image.version ]] && img=$(< /image.version) 00:01:19.235 # Minimal, systemd-like check. 00:01:19.235 if [[ -e /.dockerenv ]]; then 00:01:19.235 # Clear garbage from the node's name: 00:01:19.235 # agt-er_autotest_547-896 -> autotest_547-896 00:01:19.235 # $HOSTNAME is the actual container id 00:01:19.235 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:19.235 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:19.235 # We can assume this is a mount from a host where container is running, 00:01:19.235 # so fetch its hostname to easily identify the target swarm worker. 
00:01:19.235 container="$(< /etc/hostname) ($agent)" 00:01:19.235 else 00:01:19.235 # Fallback 00:01:19.235 container=$agent 00:01:19.235 fi 00:01:19.235 fi 00:01:19.235 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:19.235 00:01:19.505 [Pipeline] } 00:01:19.518 [Pipeline] // withEnv 00:01:19.526 [Pipeline] setCustomBuildProperty 00:01:19.539 [Pipeline] stage 00:01:19.541 [Pipeline] { (Tests) 00:01:19.555 [Pipeline] sh 00:01:19.834 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:20.107 [Pipeline] sh 00:01:20.400 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:20.674 [Pipeline] timeout 00:01:20.675 Timeout set to expire in 1 hr 30 min 00:01:20.677 [Pipeline] { 00:01:20.691 [Pipeline] sh 00:01:20.972 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:21.547 HEAD is now at 88726e83b bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:01:21.559 [Pipeline] sh 00:01:21.838 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:22.110 [Pipeline] sh 00:01:22.390 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:22.665 [Pipeline] sh 00:01:23.001 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:23.001 ++ readlink -f spdk_repo 00:01:23.001 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:23.001 + [[ -n /home/vagrant/spdk_repo ]] 00:01:23.001 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:23.001 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:23.001 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:23.001 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:23.001 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:23.001 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:23.001 + cd /home/vagrant/spdk_repo 00:01:23.001 + source /etc/os-release 00:01:23.001 ++ NAME='Fedora Linux' 00:01:23.001 ++ VERSION='39 (Cloud Edition)' 00:01:23.001 ++ ID=fedora 00:01:23.001 ++ VERSION_ID=39 00:01:23.001 ++ VERSION_CODENAME= 00:01:23.001 ++ PLATFORM_ID=platform:f39 00:01:23.001 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:23.001 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.001 ++ LOGO=fedora-logo-icon 00:01:23.001 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:23.001 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.001 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:23.001 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.001 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.001 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.001 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:23.001 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.001 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:23.001 ++ SUPPORT_END=2024-11-12 00:01:23.001 ++ VARIANT='Cloud Edition' 00:01:23.001 ++ VARIANT_ID=cloud 00:01:23.001 + uname -a 00:01:23.001 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:23.001 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:23.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:23.569 Hugepages 00:01:23.569 node hugesize free / total 00:01:23.569 node0 1048576kB 0 / 0 00:01:23.569 node0 2048kB 0 / 0 00:01:23.569 00:01:23.569 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.569 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:23.569 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:23.828 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:23.828 + rm -f /tmp/spdk-ld-path 00:01:23.828 + source autorun-spdk.conf 00:01:23.828 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.828 ++ SPDK_RUN_ASAN=1 00:01:23.828 ++ SPDK_RUN_UBSAN=1 00:01:23.828 ++ SPDK_TEST_RAID=1 00:01:23.828 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.828 ++ RUN_NIGHTLY=0 00:01:23.828 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.828 + [[ -n '' ]] 00:01:23.828 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:23.829 + for M in /var/spdk/build-*-manifest.txt 00:01:23.829 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:23.829 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:23.829 + for M in /var/spdk/build-*-manifest.txt 00:01:23.829 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.829 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:23.829 + for M in /var/spdk/build-*-manifest.txt 00:01:23.829 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.829 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:23.829 ++ uname 00:01:23.829 + [[ Linux == \L\i\n\u\x ]] 00:01:23.829 + sudo dmesg -T 00:01:23.829 + sudo dmesg --clear 00:01:23.829 + dmesg_pid=5213 00:01:23.829 + sudo dmesg -Tw 00:01:23.829 + [[ Fedora Linux == FreeBSD ]] 00:01:23.829 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.829 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.829 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.829 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.829 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.829 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.829 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.829 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:23.829 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.829 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.829 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.829 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.829 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.829 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.829 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:23.829 12:32:12 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:23.829 12:32:12 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:23.829 12:32:12 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.829 12:32:12 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:23.829 12:32:12 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:23.829 12:32:12 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:23.829 12:32:12 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.829 12:32:12 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:23.829 12:32:12 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:23.829 12:32:12 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.088 12:32:12 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:24.088 12:32:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:24.088 12:32:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.088 12:32:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.088 12:32:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.088 12:32:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.088 12:32:12 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.088 12:32:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.088 12:32:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.088 12:32:12 -- paths/export.sh@5 -- $ export PATH 00:01:24.088 12:32:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.088 12:32:12 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:24.088 12:32:12 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:24.088 12:32:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730896332.XXXXXX 00:01:24.088 12:32:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730896332.XbKaVM 00:01:24.088 12:32:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:24.088 12:32:12 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:24.088 12:32:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:24.088 12:32:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:24.088 12:32:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.088 12:32:12 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:24.088 12:32:12 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:24.088 12:32:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.088 12:32:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:24.088 12:32:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:24.088 12:32:12 -- pm/common@17 -- $ local monitor 00:01:24.088 12:32:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.088 12:32:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.088 12:32:12 -- pm/common@25 -- $ sleep 1 00:01:24.088 12:32:12 -- pm/common@21 -- $ date +%s 00:01:24.088 12:32:12 -- pm/common@21 -- $ date +%s 00:01:24.088 
12:32:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730896332 00:01:24.088 12:32:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730896332 00:01:24.088 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730896332_collect-cpu-load.pm.log 00:01:24.088 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730896332_collect-vmstat.pm.log 00:01:25.025 12:32:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:25.025 12:32:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.025 12:32:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.025 12:32:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:25.025 12:32:13 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.025 Wed Nov 6 12:32:13 PM UTC 2024 00:01:25.025 12:32:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.025 v25.01-pre-179-g88726e83b 00:01:25.025 12:32:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:25.025 12:32:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:25.025 12:32:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:25.025 12:32:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:25.025 12:32:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.025 ************************************ 00:01:25.025 START TEST asan 00:01:25.025 ************************************ 00:01:25.025 using asan 00:01:25.025 12:32:13 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:01:25.025 00:01:25.025 real 0m0.000s 00:01:25.025 user 0m0.000s 00:01:25.025 sys 0m0.000s 00:01:25.025 12:32:13 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:25.025 12:32:13 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:25.025 ************************************ 00:01:25.025 END TEST asan 00:01:25.025 ************************************ 00:01:25.025 12:32:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.025 12:32:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.025 12:32:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:25.025 12:32:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:25.025 12:32:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.025 ************************************ 00:01:25.025 START TEST ubsan 00:01:25.025 ************************************ 00:01:25.025 using ubsan 00:01:25.025 12:32:13 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:25.025 00:01:25.025 real 0m0.000s 00:01:25.025 user 0m0.000s 00:01:25.025 sys 0m0.000s 00:01:25.025 12:32:13 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:25.025 ************************************ 00:01:25.025 END TEST ubsan 00:01:25.025 ************************************ 00:01:25.025 12:32:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.025 12:32:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.025 12:32:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.025 12:32:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.025 12:32:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.025 12:32:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.025 12:32:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.025 12:32:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.025 12:32:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.025 12:32:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:25.284 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:25.284 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:25.543 Using 'verbs' RDMA provider
00:01:41.361 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:53.859 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:53.859 Creating mk/config.mk...done.
00:01:53.859 Creating mk/cc.flags.mk...done.
00:01:53.859 Type 'make' to build.
00:01:53.859 12:32:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:53.859 12:32:41 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:53.859 12:32:41 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:53.859 12:32:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.859 ************************************
00:01:53.859 START TEST make
00:01:53.859 ************************************
00:01:53.859 12:32:41 make -- common/autotest_common.sh@1127 -- $ make -j10
00:01:53.859 make[1]: Nothing to be done for 'all'.
00:02:06.150 The Meson build system 00:02:06.150 Version: 1.5.0 00:02:06.150 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:06.150 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:06.150 Build type: native build 00:02:06.150 Program cat found: YES (/usr/bin/cat) 00:02:06.150 Project name: DPDK 00:02:06.150 Project version: 24.03.0 00:02:06.150 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.150 C linker for the host machine: cc ld.bfd 2.40-14 00:02:06.150 Host machine cpu family: x86_64 00:02:06.150 Host machine cpu: x86_64 00:02:06.150 Message: ## Building in Developer Mode ## 00:02:06.150 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.150 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.150 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.150 Program python3 found: YES (/usr/bin/python3) 00:02:06.150 Program cat found: YES (/usr/bin/cat) 00:02:06.150 Compiler for C supports arguments -march=native: YES 00:02:06.150 Checking for size of "void *" : 8 00:02:06.150 Checking for size of "void *" : 8 (cached) 00:02:06.150 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:06.150 Library m found: YES 00:02:06.150 Library numa found: YES 00:02:06.150 Has header "numaif.h" : YES 00:02:06.150 Library fdt found: NO 00:02:06.150 Library execinfo found: NO 00:02:06.150 Has header "execinfo.h" : YES 00:02:06.150 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.150 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.150 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.150 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.150 Run-time dependency openssl found: YES 3.1.1 00:02:06.150 Run-time dependency libpcap found: YES 1.10.4 00:02:06.150 Has header "pcap.h" with dependency 
libpcap: YES 00:02:06.150 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.150 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.150 Compiler for C supports arguments -Wformat: YES 00:02:06.150 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.150 Compiler for C supports arguments -Wformat-security: NO 00:02:06.150 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.150 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.150 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.150 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.150 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.150 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.150 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.150 Compiler for C supports arguments -Wundef: YES 00:02:06.150 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.150 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.150 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.150 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.150 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.150 Program objdump found: YES (/usr/bin/objdump) 00:02:06.150 Compiler for C supports arguments -mavx512f: YES 00:02:06.150 Checking if "AVX512 checking" compiles: YES 00:02:06.150 Fetching value of define "__SSE4_2__" : 1 00:02:06.151 Fetching value of define "__AES__" : 1 00:02:06.151 Fetching value of define "__AVX__" : 1 00:02:06.151 Fetching value of define "__AVX2__" : 1 00:02:06.151 Fetching value of define "__AVX512BW__" : (undefined) 00:02:06.151 Fetching value of define "__AVX512CD__" : (undefined) 00:02:06.151 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:06.151 Fetching value of define "__AVX512F__" : (undefined) 00:02:06.151 Fetching value of define "__AVX512VL__" : 
(undefined) 00:02:06.151 Fetching value of define "__PCLMUL__" : 1 00:02:06.151 Fetching value of define "__RDRND__" : 1 00:02:06.151 Fetching value of define "__RDSEED__" : 1 00:02:06.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:06.151 Fetching value of define "__znver1__" : (undefined) 00:02:06.151 Fetching value of define "__znver2__" : (undefined) 00:02:06.151 Fetching value of define "__znver3__" : (undefined) 00:02:06.151 Fetching value of define "__znver4__" : (undefined) 00:02:06.151 Library asan found: YES 00:02:06.151 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.151 Message: lib/log: Defining dependency "log" 00:02:06.151 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.151 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.151 Library rt found: YES 00:02:06.151 Checking for function "getentropy" : NO 00:02:06.151 Message: lib/eal: Defining dependency "eal" 00:02:06.151 Message: lib/ring: Defining dependency "ring" 00:02:06.151 Message: lib/rcu: Defining dependency "rcu" 00:02:06.151 Message: lib/mempool: Defining dependency "mempool" 00:02:06.151 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.151 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.151 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:06.151 Compiler for C supports arguments -mpclmul: YES 00:02:06.151 Compiler for C supports arguments -maes: YES 00:02:06.151 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.151 Compiler for C supports arguments -mavx512bw: YES 00:02:06.151 Compiler for C supports arguments -mavx512dq: YES 00:02:06.151 Compiler for C supports arguments -mavx512vl: YES 00:02:06.151 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.151 Compiler for C supports arguments -mavx2: YES 00:02:06.151 Compiler for C supports arguments -mavx: YES 00:02:06.151 Message: lib/net: Defining dependency "net" 00:02:06.151 Message: lib/meter: Defining 
dependency "meter" 00:02:06.151 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.151 Message: lib/pci: Defining dependency "pci" 00:02:06.151 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.151 Message: lib/hash: Defining dependency "hash" 00:02:06.151 Message: lib/timer: Defining dependency "timer" 00:02:06.151 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.151 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.151 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.151 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.151 Message: lib/power: Defining dependency "power" 00:02:06.151 Message: lib/reorder: Defining dependency "reorder" 00:02:06.151 Message: lib/security: Defining dependency "security" 00:02:06.151 Has header "linux/userfaultfd.h" : YES 00:02:06.151 Has header "linux/vduse.h" : YES 00:02:06.151 Message: lib/vhost: Defining dependency "vhost" 00:02:06.151 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.151 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.151 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.151 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.151 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.151 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.151 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.151 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.151 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.151 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.151 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:06.151 Configuring doxy-api-html.conf using configuration 00:02:06.151 Configuring doxy-api-man.conf using configuration 00:02:06.151 Program mandb found: YES 
(/usr/bin/mandb)
00:02:06.151 Program sphinx-build found: NO
00:02:06.151 Configuring rte_build_config.h using configuration
00:02:06.151 Message:
00:02:06.151 =================
00:02:06.151 Applications Enabled
00:02:06.151 =================
00:02:06.151
00:02:06.151 apps:
00:02:06.151
00:02:06.151
00:02:06.151 Message:
00:02:06.151 =================
00:02:06.151 Libraries Enabled
00:02:06.151 =================
00:02:06.151
00:02:06.151 libs:
00:02:06.151 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:06.151 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:06.151 cryptodev, dmadev, power, reorder, security, vhost,
00:02:06.151
00:02:06.151 Message:
00:02:06.151 ===============
00:02:06.151 Drivers Enabled
00:02:06.151 ===============
00:02:06.151
00:02:06.151 common:
00:02:06.151
00:02:06.151 bus:
00:02:06.151 pci, vdev,
00:02:06.151 mempool:
00:02:06.151 ring,
00:02:06.151 dma:
00:02:06.151
00:02:06.151 net:
00:02:06.151
00:02:06.151 crypto:
00:02:06.151
00:02:06.151 compress:
00:02:06.151
00:02:06.151 vdpa:
00:02:06.151
00:02:06.151
00:02:06.151 Message:
00:02:06.151 =================
00:02:06.151 Content Skipped
00:02:06.151 =================
00:02:06.151
00:02:06.151 apps:
00:02:06.151 dumpcap: explicitly disabled via build config
00:02:06.151 graph: explicitly disabled via build config
00:02:06.151 pdump: explicitly disabled via build config
00:02:06.151 proc-info: explicitly disabled via build config
00:02:06.151 test-acl: explicitly disabled via build config
00:02:06.151 test-bbdev: explicitly disabled via build config
00:02:06.151 test-cmdline: explicitly disabled via build config
00:02:06.151 test-compress-perf: explicitly disabled via build config
00:02:06.151 test-crypto-perf: explicitly disabled via build config
00:02:06.151 test-dma-perf: explicitly disabled via build config
00:02:06.151 test-eventdev: explicitly disabled via build config
00:02:06.151 test-fib: explicitly disabled via build config
00:02:06.151
test-flow-perf: explicitly disabled via build config 00:02:06.151 test-gpudev: explicitly disabled via build config 00:02:06.151 test-mldev: explicitly disabled via build config 00:02:06.151 test-pipeline: explicitly disabled via build config 00:02:06.151 test-pmd: explicitly disabled via build config 00:02:06.151 test-regex: explicitly disabled via build config 00:02:06.151 test-sad: explicitly disabled via build config 00:02:06.151 test-security-perf: explicitly disabled via build config 00:02:06.151 00:02:06.151 libs: 00:02:06.151 argparse: explicitly disabled via build config 00:02:06.151 metrics: explicitly disabled via build config 00:02:06.152 acl: explicitly disabled via build config 00:02:06.152 bbdev: explicitly disabled via build config 00:02:06.152 bitratestats: explicitly disabled via build config 00:02:06.152 bpf: explicitly disabled via build config 00:02:06.152 cfgfile: explicitly disabled via build config 00:02:06.152 distributor: explicitly disabled via build config 00:02:06.152 efd: explicitly disabled via build config 00:02:06.152 eventdev: explicitly disabled via build config 00:02:06.152 dispatcher: explicitly disabled via build config 00:02:06.152 gpudev: explicitly disabled via build config 00:02:06.152 gro: explicitly disabled via build config 00:02:06.152 gso: explicitly disabled via build config 00:02:06.152 ip_frag: explicitly disabled via build config 00:02:06.152 jobstats: explicitly disabled via build config 00:02:06.152 latencystats: explicitly disabled via build config 00:02:06.152 lpm: explicitly disabled via build config 00:02:06.152 member: explicitly disabled via build config 00:02:06.152 pcapng: explicitly disabled via build config 00:02:06.152 rawdev: explicitly disabled via build config 00:02:06.152 regexdev: explicitly disabled via build config 00:02:06.152 mldev: explicitly disabled via build config 00:02:06.152 rib: explicitly disabled via build config 00:02:06.152 sched: explicitly disabled via build config 00:02:06.152 
stack: explicitly disabled via build config 00:02:06.152 ipsec: explicitly disabled via build config 00:02:06.152 pdcp: explicitly disabled via build config 00:02:06.152 fib: explicitly disabled via build config 00:02:06.152 port: explicitly disabled via build config 00:02:06.152 pdump: explicitly disabled via build config 00:02:06.152 table: explicitly disabled via build config 00:02:06.152 pipeline: explicitly disabled via build config 00:02:06.152 graph: explicitly disabled via build config 00:02:06.152 node: explicitly disabled via build config 00:02:06.152 00:02:06.152 drivers: 00:02:06.152 common/cpt: not in enabled drivers build config 00:02:06.152 common/dpaax: not in enabled drivers build config 00:02:06.152 common/iavf: not in enabled drivers build config 00:02:06.152 common/idpf: not in enabled drivers build config 00:02:06.152 common/ionic: not in enabled drivers build config 00:02:06.152 common/mvep: not in enabled drivers build config 00:02:06.152 common/octeontx: not in enabled drivers build config 00:02:06.152 bus/auxiliary: not in enabled drivers build config 00:02:06.152 bus/cdx: not in enabled drivers build config 00:02:06.152 bus/dpaa: not in enabled drivers build config 00:02:06.152 bus/fslmc: not in enabled drivers build config 00:02:06.152 bus/ifpga: not in enabled drivers build config 00:02:06.152 bus/platform: not in enabled drivers build config 00:02:06.152 bus/uacce: not in enabled drivers build config 00:02:06.152 bus/vmbus: not in enabled drivers build config 00:02:06.152 common/cnxk: not in enabled drivers build config 00:02:06.152 common/mlx5: not in enabled drivers build config 00:02:06.152 common/nfp: not in enabled drivers build config 00:02:06.152 common/nitrox: not in enabled drivers build config 00:02:06.152 common/qat: not in enabled drivers build config 00:02:06.152 common/sfc_efx: not in enabled drivers build config 00:02:06.152 mempool/bucket: not in enabled drivers build config 00:02:06.152 mempool/cnxk: not in enabled 
drivers build config 00:02:06.152 mempool/dpaa: not in enabled drivers build config 00:02:06.152 mempool/dpaa2: not in enabled drivers build config 00:02:06.152 mempool/octeontx: not in enabled drivers build config 00:02:06.152 mempool/stack: not in enabled drivers build config 00:02:06.152 dma/cnxk: not in enabled drivers build config 00:02:06.152 dma/dpaa: not in enabled drivers build config 00:02:06.152 dma/dpaa2: not in enabled drivers build config 00:02:06.152 dma/hisilicon: not in enabled drivers build config 00:02:06.152 dma/idxd: not in enabled drivers build config 00:02:06.152 dma/ioat: not in enabled drivers build config 00:02:06.152 dma/skeleton: not in enabled drivers build config 00:02:06.152 net/af_packet: not in enabled drivers build config 00:02:06.152 net/af_xdp: not in enabled drivers build config 00:02:06.152 net/ark: not in enabled drivers build config 00:02:06.152 net/atlantic: not in enabled drivers build config 00:02:06.152 net/avp: not in enabled drivers build config 00:02:06.152 net/axgbe: not in enabled drivers build config 00:02:06.152 net/bnx2x: not in enabled drivers build config 00:02:06.152 net/bnxt: not in enabled drivers build config 00:02:06.152 net/bonding: not in enabled drivers build config 00:02:06.152 net/cnxk: not in enabled drivers build config 00:02:06.152 net/cpfl: not in enabled drivers build config 00:02:06.152 net/cxgbe: not in enabled drivers build config 00:02:06.152 net/dpaa: not in enabled drivers build config 00:02:06.152 net/dpaa2: not in enabled drivers build config 00:02:06.152 net/e1000: not in enabled drivers build config 00:02:06.152 net/ena: not in enabled drivers build config 00:02:06.152 net/enetc: not in enabled drivers build config 00:02:06.152 net/enetfec: not in enabled drivers build config 00:02:06.152 net/enic: not in enabled drivers build config 00:02:06.152 net/failsafe: not in enabled drivers build config 00:02:06.152 net/fm10k: not in enabled drivers build config 00:02:06.152 net/gve: not in 
enabled drivers build config 00:02:06.152 net/hinic: not in enabled drivers build config 00:02:06.152 net/hns3: not in enabled drivers build config 00:02:06.152 net/i40e: not in enabled drivers build config 00:02:06.152 net/iavf: not in enabled drivers build config 00:02:06.152 net/ice: not in enabled drivers build config 00:02:06.152 net/idpf: not in enabled drivers build config 00:02:06.152 net/igc: not in enabled drivers build config 00:02:06.152 net/ionic: not in enabled drivers build config 00:02:06.152 net/ipn3ke: not in enabled drivers build config 00:02:06.152 net/ixgbe: not in enabled drivers build config 00:02:06.152 net/mana: not in enabled drivers build config 00:02:06.152 net/memif: not in enabled drivers build config 00:02:06.152 net/mlx4: not in enabled drivers build config 00:02:06.152 net/mlx5: not in enabled drivers build config 00:02:06.152 net/mvneta: not in enabled drivers build config 00:02:06.152 net/mvpp2: not in enabled drivers build config 00:02:06.152 net/netvsc: not in enabled drivers build config 00:02:06.152 net/nfb: not in enabled drivers build config 00:02:06.152 net/nfp: not in enabled drivers build config 00:02:06.152 net/ngbe: not in enabled drivers build config 00:02:06.152 net/null: not in enabled drivers build config 00:02:06.152 net/octeontx: not in enabled drivers build config 00:02:06.152 net/octeon_ep: not in enabled drivers build config 00:02:06.152 net/pcap: not in enabled drivers build config 00:02:06.152 net/pfe: not in enabled drivers build config 00:02:06.152 net/qede: not in enabled drivers build config 00:02:06.152 net/ring: not in enabled drivers build config 00:02:06.152 net/sfc: not in enabled drivers build config 00:02:06.152 net/softnic: not in enabled drivers build config 00:02:06.152 net/tap: not in enabled drivers build config 00:02:06.152 net/thunderx: not in enabled drivers build config 00:02:06.152 net/txgbe: not in enabled drivers build config 00:02:06.152 net/vdev_netvsc: not in enabled drivers build 
config 00:02:06.152 net/vhost: not in enabled drivers build config 00:02:06.152 net/virtio: not in enabled drivers build config 00:02:06.152 net/vmxnet3: not in enabled drivers build config 00:02:06.152 raw/*: missing internal dependency, "rawdev" 00:02:06.152 crypto/armv8: not in enabled drivers build config 00:02:06.152 crypto/bcmfs: not in enabled drivers build config 00:02:06.152 crypto/caam_jr: not in enabled drivers build config 00:02:06.152 crypto/ccp: not in enabled drivers build config 00:02:06.152 crypto/cnxk: not in enabled drivers build config 00:02:06.152 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.152 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.152 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.152 crypto/mlx5: not in enabled drivers build config 00:02:06.152 crypto/mvsam: not in enabled drivers build config 00:02:06.152 crypto/nitrox: not in enabled drivers build config 00:02:06.152 crypto/null: not in enabled drivers build config 00:02:06.152 crypto/octeontx: not in enabled drivers build config 00:02:06.152 crypto/openssl: not in enabled drivers build config 00:02:06.152 crypto/scheduler: not in enabled drivers build config 00:02:06.152 crypto/uadk: not in enabled drivers build config 00:02:06.152 crypto/virtio: not in enabled drivers build config 00:02:06.152 compress/isal: not in enabled drivers build config 00:02:06.152 compress/mlx5: not in enabled drivers build config 00:02:06.152 compress/nitrox: not in enabled drivers build config 00:02:06.152 compress/octeontx: not in enabled drivers build config 00:02:06.152 compress/zlib: not in enabled drivers build config 00:02:06.153 regex/*: missing internal dependency, "regexdev" 00:02:06.153 ml/*: missing internal dependency, "mldev" 00:02:06.153 vdpa/ifc: not in enabled drivers build config 00:02:06.153 vdpa/mlx5: not in enabled drivers build config 00:02:06.153 vdpa/nfp: not in enabled drivers build config 00:02:06.153 vdpa/sfc: not in enabled 
drivers build config
00:02:06.153 event/*: missing internal dependency, "eventdev"
00:02:06.153 baseband/*: missing internal dependency, "bbdev"
00:02:06.153 gpu/*: missing internal dependency, "gpudev"
00:02:06.153
00:02:06.153
00:02:06.153 Build targets in project: 85
00:02:06.153
00:02:06.153 DPDK 24.03.0
00:02:06.153
00:02:06.153 User defined options
00:02:06.153 buildtype : debug
00:02:06.153 default_library : shared
00:02:06.153 libdir : lib
00:02:06.153 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:06.153 b_sanitize : address
00:02:06.153 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:06.153 c_link_args :
00:02:06.153 cpu_instruction_set: native
00:02:06.153 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:06.153 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:06.153 enable_docs : false
00:02:06.153 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:06.153 enable_kmods : false
00:02:06.153 max_lcores : 128
00:02:06.153 tests : false
00:02:06.153
00:02:06.153 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.153 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:06.153 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:06.153 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:06.153 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:06.153 [4/268] Linking static target lib/librte_log.a
00:02:06.153 [5/268] Linking static target lib/librte_kvargs.a
00:02:06.153
[6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.769 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.769 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.028 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.028 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.028 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.028 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.028 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.028 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.028 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.028 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.286 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.286 [18/268] Linking target lib/librte_log.so.24.1 00:02:07.286 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.286 [20/268] Linking static target lib/librte_telemetry.a 00:02:07.545 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:07.545 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:07.805 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:07.805 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:07.805 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:07.805 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.063 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 
00:02:08.063 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.063 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.063 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.322 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.322 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.322 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.322 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.581 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.581 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:08.581 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.840 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.840 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.840 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.840 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:08.840 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.098 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.098 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.357 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.357 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.615 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.615 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.615 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.874 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.874 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.874 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.874 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.874 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.133 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.133 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.392 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.651 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.651 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.651 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.651 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.651 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.651 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.651 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.911 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.911 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.168 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.426 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.426 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.426 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.426 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.685 [72/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.685 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.685 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.685 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.946 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.946 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.946 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.946 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.946 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.205 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.205 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.464 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.464 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.464 [85/268] Linking static target lib/librte_ring.a 00:02:12.724 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.724 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.724 [88/268] Linking static target lib/librte_eal.a 00:02:12.724 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.983 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.983 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.983 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.983 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.242 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.242 [95/268] Linking static target 
lib/librte_mempool.a 00:02:13.242 [96/268] Linking static target lib/librte_rcu.a 00:02:13.242 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.242 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.242 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.501 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.760 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.760 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.760 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.760 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.760 [105/268] Linking static target lib/librte_mbuf.a 00:02:14.019 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.019 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.019 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.019 [109/268] Linking static target lib/librte_meter.a 00:02:14.278 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.278 [111/268] Linking static target lib/librte_net.a 00:02:14.279 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.538 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.538 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.538 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.538 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.797 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.797 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.057 [119/268] 
Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.315 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.574 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.574 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.872 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.872 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.872 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.872 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.872 [127/268] Linking static target lib/librte_pci.a 00:02:15.872 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.130 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.130 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.388 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.388 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.388 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.388 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.388 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.388 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.388 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.388 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.388 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.388 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.647 [141/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.647 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.647 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.647 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.906 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.906 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.906 [147/268] Linking static target lib/librte_cmdline.a 00:02:17.165 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:17.165 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.424 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.424 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.424 [152/268] Linking static target lib/librte_timer.a 00:02:17.683 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.683 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:17.943 [155/268] Linking static target lib/librte_ethdev.a 00:02:17.943 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.943 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.201 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.201 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.201 [160/268] Linking static target lib/librte_compressdev.a 00:02:18.201 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.201 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.460 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.460 
[164/268] Linking static target lib/librte_hash.a 00:02:18.460 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.719 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.719 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.719 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.978 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.978 [170/268] Linking static target lib/librte_dmadev.a 00:02:18.978 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.237 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.237 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.237 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:19.496 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.496 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.755 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.755 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.755 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.014 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.014 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.014 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.581 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.581 [184/268] Linking static target lib/librte_power.a 00:02:20.581 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:20.581 
[186/268] Linking static target lib/librte_cryptodev.a 00:02:20.840 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:20.840 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.840 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.840 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:20.840 [191/268] Linking static target lib/librte_security.a 00:02:20.840 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:20.840 [193/268] Linking static target lib/librte_reorder.a 00:02:21.776 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.776 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.776 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.776 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.776 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.035 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.293 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.552 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.811 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.811 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.811 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.811 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.069 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.328 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.328 [208/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.328 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.586 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.586 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.586 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.845 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.845 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.845 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.845 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.845 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.845 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.845 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.845 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:23.845 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.103 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.103 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.103 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.104 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.104 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.362 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.298 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:25.298 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.298 [230/268] Linking target lib/librte_eal.so.24.1 00:02:25.298 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.298 [232/268] Linking target lib/librte_meter.so.24.1 00:02:25.298 [233/268] Linking target lib/librte_ring.so.24.1 00:02:25.298 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:25.298 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.298 [236/268] Linking target lib/librte_pci.so.24.1 00:02:25.298 [237/268] Linking target lib/librte_timer.so.24.1 00:02:25.557 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.558 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.558 [240/268] Linking target lib/librte_rcu.so.24.1 00:02:25.558 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.558 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.558 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:25.558 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.558 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.558 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.816 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.817 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.817 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.817 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.075 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:26.075 [252/268] Linking target lib/librte_net.so.24.1 00:02:26.075 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:26.075 
[254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:26.075 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.075 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.075 [257/268] Linking target lib/librte_hash.so.24.1 00:02:26.075 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:26.075 [259/268] Linking target lib/librte_security.so.24.1 00:02:26.333 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.333 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.333 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.592 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.592 [264/268] Linking target lib/librte_power.so.24.1 00:02:29.875 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.875 [266/268] Linking static target lib/librte_vhost.a 00:02:31.249 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.249 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:31.249 INFO: autodetecting backend as ninja 00:02:31.249 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.185 CC lib/log/log.o 00:02:53.185 CC lib/log/log_flags.o 00:02:53.185 CC lib/log/log_deprecated.o 00:02:53.185 CC lib/ut/ut.o 00:02:53.185 CC lib/ut_mock/mock.o 00:02:53.185 LIB libspdk_ut_mock.a 00:02:53.185 LIB libspdk_ut.a 00:02:53.185 LIB libspdk_log.a 00:02:53.185 SO libspdk_ut.so.2.0 00:02:53.185 SO libspdk_ut_mock.so.6.0 00:02:53.185 SO libspdk_log.so.7.1 00:02:53.185 SYMLINK libspdk_ut_mock.so 00:02:53.185 SYMLINK libspdk_ut.so 00:02:53.185 SYMLINK libspdk_log.so 00:02:53.444 CXX lib/trace_parser/trace.o 00:02:53.444 CC lib/ioat/ioat.o 00:02:53.444 CC lib/dma/dma.o 
00:02:53.444 CC lib/util/base64.o 00:02:53.444 CC lib/util/bit_array.o 00:02:53.444 CC lib/util/cpuset.o 00:02:53.444 CC lib/util/crc16.o 00:02:53.444 CC lib/util/crc32.o 00:02:53.444 CC lib/util/crc32c.o 00:02:53.444 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.444 CC lib/vfio_user/host/vfio_user.o 00:02:53.444 CC lib/util/crc32_ieee.o 00:02:53.444 CC lib/util/crc64.o 00:02:53.702 CC lib/util/dif.o 00:02:53.702 LIB libspdk_dma.a 00:02:53.702 CC lib/util/fd.o 00:02:53.702 SO libspdk_dma.so.5.0 00:02:53.702 CC lib/util/fd_group.o 00:02:53.702 LIB libspdk_ioat.a 00:02:53.702 SYMLINK libspdk_dma.so 00:02:53.702 CC lib/util/file.o 00:02:53.702 CC lib/util/hexlify.o 00:02:53.702 CC lib/util/iov.o 00:02:53.702 SO libspdk_ioat.so.7.0 00:02:53.702 SYMLINK libspdk_ioat.so 00:02:53.702 CC lib/util/math.o 00:02:53.702 CC lib/util/net.o 00:02:53.702 CC lib/util/pipe.o 00:02:53.961 CC lib/util/strerror_tls.o 00:02:53.961 CC lib/util/string.o 00:02:53.961 LIB libspdk_vfio_user.a 00:02:53.961 SO libspdk_vfio_user.so.5.0 00:02:53.961 CC lib/util/uuid.o 00:02:53.961 CC lib/util/xor.o 00:02:53.961 SYMLINK libspdk_vfio_user.so 00:02:53.961 CC lib/util/zipf.o 00:02:53.961 CC lib/util/md5.o 00:02:54.527 LIB libspdk_util.a 00:02:54.527 SO libspdk_util.so.10.1 00:02:54.527 LIB libspdk_trace_parser.a 00:02:54.786 SYMLINK libspdk_util.so 00:02:54.786 SO libspdk_trace_parser.so.6.0 00:02:54.786 SYMLINK libspdk_trace_parser.so 00:02:54.786 CC lib/env_dpdk/env.o 00:02:54.786 CC lib/env_dpdk/memory.o 00:02:54.786 CC lib/env_dpdk/pci.o 00:02:54.786 CC lib/env_dpdk/threads.o 00:02:54.786 CC lib/env_dpdk/init.o 00:02:54.786 CC lib/json/json_parse.o 00:02:54.786 CC lib/vmd/vmd.o 00:02:54.786 CC lib/idxd/idxd.o 00:02:54.786 CC lib/conf/conf.o 00:02:54.786 CC lib/rdma_utils/rdma_utils.o 00:02:55.045 CC lib/idxd/idxd_user.o 00:02:55.045 LIB libspdk_conf.a 00:02:55.045 SO libspdk_conf.so.6.0 00:02:55.045 CC lib/json/json_util.o 00:02:55.302 LIB libspdk_rdma_utils.a 00:02:55.302 SYMLINK 
libspdk_conf.so 00:02:55.302 CC lib/idxd/idxd_kernel.o 00:02:55.302 SO libspdk_rdma_utils.so.1.0 00:02:55.302 SYMLINK libspdk_rdma_utils.so 00:02:55.302 CC lib/env_dpdk/pci_ioat.o 00:02:55.302 CC lib/json/json_write.o 00:02:55.302 CC lib/vmd/led.o 00:02:55.302 CC lib/env_dpdk/pci_virtio.o 00:02:55.560 CC lib/env_dpdk/pci_vmd.o 00:02:55.560 CC lib/env_dpdk/pci_idxd.o 00:02:55.560 CC lib/rdma_provider/common.o 00:02:55.560 CC lib/env_dpdk/pci_event.o 00:02:55.560 CC lib/env_dpdk/sigbus_handler.o 00:02:55.560 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:55.560 CC lib/env_dpdk/pci_dpdk.o 00:02:55.560 LIB libspdk_json.a 00:02:55.560 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.560 SO libspdk_json.so.6.0 00:02:55.560 LIB libspdk_idxd.a 00:02:55.560 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.819 SO libspdk_idxd.so.12.1 00:02:55.819 LIB libspdk_vmd.a 00:02:55.819 SYMLINK libspdk_json.so 00:02:55.819 SO libspdk_vmd.so.6.0 00:02:55.819 SYMLINK libspdk_idxd.so 00:02:55.819 LIB libspdk_rdma_provider.a 00:02:55.819 SYMLINK libspdk_vmd.so 00:02:55.819 SO libspdk_rdma_provider.so.7.0 00:02:55.819 SYMLINK libspdk_rdma_provider.so 00:02:56.078 CC lib/jsonrpc/jsonrpc_server.o 00:02:56.078 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:56.078 CC lib/jsonrpc/jsonrpc_client.o 00:02:56.078 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:56.336 LIB libspdk_jsonrpc.a 00:02:56.336 SO libspdk_jsonrpc.so.6.0 00:02:56.594 SYMLINK libspdk_jsonrpc.so 00:02:56.854 CC lib/rpc/rpc.o 00:02:57.115 LIB libspdk_env_dpdk.a 00:02:57.115 SO libspdk_env_dpdk.so.15.1 00:02:57.115 LIB libspdk_rpc.a 00:02:57.115 SO libspdk_rpc.so.6.0 00:02:57.115 SYMLINK libspdk_rpc.so 00:02:57.374 SYMLINK libspdk_env_dpdk.so 00:02:57.374 CC lib/keyring/keyring.o 00:02:57.374 CC lib/notify/notify.o 00:02:57.374 CC lib/notify/notify_rpc.o 00:02:57.374 CC lib/keyring/keyring_rpc.o 00:02:57.374 CC lib/trace/trace.o 00:02:57.374 CC lib/trace/trace_flags.o 00:02:57.374 CC lib/trace/trace_rpc.o 00:02:57.632 LIB libspdk_notify.a 00:02:57.632 SO 
libspdk_notify.so.6.0 00:02:57.890 LIB libspdk_keyring.a 00:02:57.890 SYMLINK libspdk_notify.so 00:02:57.890 SO libspdk_keyring.so.2.0 00:02:57.890 LIB libspdk_trace.a 00:02:57.890 SYMLINK libspdk_keyring.so 00:02:57.890 SO libspdk_trace.so.11.0 00:02:57.890 SYMLINK libspdk_trace.so 00:02:58.148 CC lib/thread/thread.o 00:02:58.148 CC lib/thread/iobuf.o 00:02:58.148 CC lib/sock/sock.o 00:02:58.148 CC lib/sock/sock_rpc.o 00:02:58.715 LIB libspdk_sock.a 00:02:58.973 SO libspdk_sock.so.10.0 00:02:58.973 SYMLINK libspdk_sock.so 00:02:59.231 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.231 CC lib/nvme/nvme_fabric.o 00:02:59.231 CC lib/nvme/nvme_ctrlr.o 00:02:59.231 CC lib/nvme/nvme_pcie.o 00:02:59.231 CC lib/nvme/nvme_ns_cmd.o 00:02:59.231 CC lib/nvme/nvme_pcie_common.o 00:02:59.231 CC lib/nvme/nvme_ns.o 00:02:59.231 CC lib/nvme/nvme_qpair.o 00:02:59.231 CC lib/nvme/nvme.o 00:03:00.165 CC lib/nvme/nvme_quirks.o 00:03:00.165 CC lib/nvme/nvme_transport.o 00:03:00.165 CC lib/nvme/nvme_discovery.o 00:03:00.165 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:00.423 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:00.423 CC lib/nvme/nvme_tcp.o 00:03:00.423 LIB libspdk_thread.a 00:03:00.423 CC lib/nvme/nvme_opal.o 00:03:00.423 SO libspdk_thread.so.11.0 00:03:00.681 SYMLINK libspdk_thread.so 00:03:00.681 CC lib/nvme/nvme_io_msg.o 00:03:00.681 CC lib/nvme/nvme_poll_group.o 00:03:00.681 CC lib/nvme/nvme_zns.o 00:03:00.938 CC lib/nvme/nvme_stubs.o 00:03:00.938 CC lib/nvme/nvme_auth.o 00:03:00.938 CC lib/nvme/nvme_cuse.o 00:03:00.938 CC lib/nvme/nvme_rdma.o 00:03:01.195 CC lib/accel/accel.o 00:03:01.452 CC lib/accel/accel_rpc.o 00:03:01.452 CC lib/accel/accel_sw.o 00:03:01.710 CC lib/blob/blobstore.o 00:03:01.710 CC lib/init/json_config.o 00:03:01.969 CC lib/init/subsystem.o 00:03:01.969 CC lib/virtio/virtio.o 00:03:01.969 CC lib/init/subsystem_rpc.o 00:03:01.969 CC lib/blob/request.o 00:03:02.227 CC lib/blob/zeroes.o 00:03:02.227 CC lib/blob/blob_bs_dev.o 00:03:02.227 CC lib/virtio/virtio_vhost_user.o 
00:03:02.227 CC lib/init/rpc.o 00:03:02.227 CC lib/virtio/virtio_vfio_user.o 00:03:02.227 CC lib/virtio/virtio_pci.o 00:03:02.485 LIB libspdk_init.a 00:03:02.485 CC lib/fsdev/fsdev.o 00:03:02.485 CC lib/fsdev/fsdev_io.o 00:03:02.485 SO libspdk_init.so.6.0 00:03:02.485 CC lib/fsdev/fsdev_rpc.o 00:03:02.485 SYMLINK libspdk_init.so 00:03:02.743 LIB libspdk_accel.a 00:03:02.743 LIB libspdk_virtio.a 00:03:02.743 CC lib/event/app.o 00:03:02.743 CC lib/event/reactor.o 00:03:02.743 CC lib/event/log_rpc.o 00:03:02.743 CC lib/event/app_rpc.o 00:03:02.743 SO libspdk_accel.so.16.0 00:03:02.743 SO libspdk_virtio.so.7.0 00:03:02.743 LIB libspdk_nvme.a 00:03:02.743 SYMLINK libspdk_virtio.so 00:03:02.743 CC lib/event/scheduler_static.o 00:03:02.743 SYMLINK libspdk_accel.so 00:03:03.002 SO libspdk_nvme.so.15.0 00:03:03.002 CC lib/bdev/bdev.o 00:03:03.002 CC lib/bdev/bdev_zone.o 00:03:03.002 CC lib/bdev/bdev_rpc.o 00:03:03.002 CC lib/bdev/part.o 00:03:03.002 CC lib/bdev/scsi_nvme.o 00:03:03.262 LIB libspdk_fsdev.a 00:03:03.262 SYMLINK libspdk_nvme.so 00:03:03.262 SO libspdk_fsdev.so.2.0 00:03:03.262 LIB libspdk_event.a 00:03:03.523 SO libspdk_event.so.14.0 00:03:03.523 SYMLINK libspdk_fsdev.so 00:03:03.523 SYMLINK libspdk_event.so 00:03:03.781 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:04.717 LIB libspdk_fuse_dispatcher.a 00:03:04.717 SO libspdk_fuse_dispatcher.so.1.0 00:03:04.717 SYMLINK libspdk_fuse_dispatcher.so 00:03:06.620 LIB libspdk_blob.a 00:03:06.620 SO libspdk_blob.so.11.0 00:03:06.621 SYMLINK libspdk_blob.so 00:03:06.879 CC lib/blobfs/tree.o 00:03:06.879 CC lib/blobfs/blobfs.o 00:03:06.879 CC lib/lvol/lvol.o 00:03:06.879 LIB libspdk_bdev.a 00:03:07.138 SO libspdk_bdev.so.17.0 00:03:07.138 SYMLINK libspdk_bdev.so 00:03:07.396 CC lib/scsi/lun.o 00:03:07.396 CC lib/scsi/dev.o 00:03:07.396 CC lib/scsi/port.o 00:03:07.396 CC lib/scsi/scsi.o 00:03:07.396 CC lib/nvmf/ctrlr.o 00:03:07.396 CC lib/nbd/nbd.o 00:03:07.396 CC lib/ublk/ublk.o 00:03:07.396 CC lib/ftl/ftl_core.o 
00:03:07.654 CC lib/scsi/scsi_bdev.o 00:03:07.654 CC lib/nvmf/ctrlr_discovery.o 00:03:07.654 CC lib/nvmf/ctrlr_bdev.o 00:03:07.912 CC lib/scsi/scsi_pr.o 00:03:07.912 LIB libspdk_blobfs.a 00:03:07.912 CC lib/ftl/ftl_init.o 00:03:07.912 SO libspdk_blobfs.so.10.0 00:03:07.912 CC lib/nbd/nbd_rpc.o 00:03:08.171 SYMLINK libspdk_blobfs.so 00:03:08.171 CC lib/ftl/ftl_layout.o 00:03:08.171 LIB libspdk_lvol.a 00:03:08.171 SO libspdk_lvol.so.10.0 00:03:08.171 LIB libspdk_nbd.a 00:03:08.171 CC lib/ublk/ublk_rpc.o 00:03:08.171 SO libspdk_nbd.so.7.0 00:03:08.171 SYMLINK libspdk_lvol.so 00:03:08.171 CC lib/ftl/ftl_debug.o 00:03:08.171 CC lib/scsi/scsi_rpc.o 00:03:08.171 CC lib/nvmf/subsystem.o 00:03:08.171 CC lib/ftl/ftl_io.o 00:03:08.171 CC lib/scsi/task.o 00:03:08.171 SYMLINK libspdk_nbd.so 00:03:08.171 CC lib/nvmf/nvmf.o 00:03:08.429 LIB libspdk_ublk.a 00:03:08.429 CC lib/nvmf/nvmf_rpc.o 00:03:08.429 SO libspdk_ublk.so.3.0 00:03:08.429 CC lib/ftl/ftl_sb.o 00:03:08.429 SYMLINK libspdk_ublk.so 00:03:08.429 CC lib/nvmf/transport.o 00:03:08.429 LIB libspdk_scsi.a 00:03:08.429 CC lib/nvmf/tcp.o 00:03:08.689 CC lib/nvmf/stubs.o 00:03:08.689 SO libspdk_scsi.so.9.0 00:03:08.689 CC lib/ftl/ftl_l2p.o 00:03:08.689 CC lib/ftl/ftl_l2p_flat.o 00:03:08.689 SYMLINK libspdk_scsi.so 00:03:08.689 CC lib/ftl/ftl_nv_cache.o 00:03:08.951 CC lib/nvmf/mdns_server.o 00:03:08.951 CC lib/nvmf/rdma.o 00:03:09.209 CC lib/nvmf/auth.o 00:03:09.469 CC lib/ftl/ftl_band.o 00:03:09.469 CC lib/ftl/ftl_band_ops.o 00:03:09.469 CC lib/ftl/ftl_writer.o 00:03:09.469 CC lib/ftl/ftl_rq.o 00:03:09.746 CC lib/ftl/ftl_reloc.o 00:03:09.746 CC lib/ftl/ftl_l2p_cache.o 00:03:09.746 CC lib/ftl/ftl_p2l.o 00:03:09.746 CC lib/ftl/ftl_p2l_log.o 00:03:09.746 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.004 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:10.004 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:10.262 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:10.262 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.262 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.262 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.262 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.262 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.262 CC lib/iscsi/conn.o 00:03:10.521 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.521 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.521 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.521 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.521 CC lib/iscsi/init_grp.o 00:03:10.521 CC lib/iscsi/iscsi.o 00:03:10.778 CC lib/iscsi/param.o 00:03:10.778 CC lib/iscsi/portal_grp.o 00:03:10.778 CC lib/ftl/utils/ftl_conf.o 00:03:10.778 CC lib/iscsi/tgt_node.o 00:03:10.778 CC lib/iscsi/iscsi_subsystem.o 00:03:11.036 CC lib/iscsi/iscsi_rpc.o 00:03:11.036 CC lib/ftl/utils/ftl_md.o 00:03:11.036 CC lib/ftl/utils/ftl_mempool.o 00:03:11.036 CC lib/iscsi/task.o 00:03:11.036 CC lib/ftl/utils/ftl_bitmap.o 00:03:11.294 CC lib/ftl/utils/ftl_property.o 00:03:11.294 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:11.294 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:11.294 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:11.294 CC lib/vhost/vhost.o 00:03:11.552 CC lib/vhost/vhost_rpc.o 00:03:11.552 CC lib/vhost/vhost_scsi.o 00:03:11.552 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:11.552 CC lib/vhost/vhost_blk.o 00:03:11.552 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:11.552 CC lib/vhost/rte_vhost_user.o 00:03:11.552 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:11.811 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:11.811 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.811 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.069 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.069 LIB libspdk_nvmf.a 00:03:12.069 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.345 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:12.345 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:12.345 SO libspdk_nvmf.so.20.0 00:03:12.345 CC lib/ftl/base/ftl_base_dev.o 00:03:12.345 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.345 CC lib/ftl/ftl_trace.o 00:03:12.604 LIB libspdk_iscsi.a 00:03:12.604 SYMLINK libspdk_nvmf.so 00:03:12.604 SO libspdk_iscsi.so.8.0 00:03:12.863 LIB 
libspdk_ftl.a 00:03:12.863 SYMLINK libspdk_iscsi.so 00:03:13.122 LIB libspdk_vhost.a 00:03:13.122 SO libspdk_ftl.so.9.0 00:03:13.122 SO libspdk_vhost.so.8.0 00:03:13.380 SYMLINK libspdk_vhost.so 00:03:13.380 SYMLINK libspdk_ftl.so 00:03:13.638 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.897 CC module/keyring/linux/keyring.o 00:03:13.897 CC module/accel/dsa/accel_dsa.o 00:03:13.897 CC module/sock/posix/posix.o 00:03:13.897 CC module/fsdev/aio/fsdev_aio.o 00:03:13.897 CC module/keyring/file/keyring.o 00:03:13.897 CC module/blob/bdev/blob_bdev.o 00:03:13.897 CC module/accel/ioat/accel_ioat.o 00:03:13.897 CC module/accel/error/accel_error.o 00:03:13.897 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.897 LIB libspdk_env_dpdk_rpc.a 00:03:13.897 SO libspdk_env_dpdk_rpc.so.6.0 00:03:14.155 CC module/keyring/linux/keyring_rpc.o 00:03:14.155 SYMLINK libspdk_env_dpdk_rpc.so 00:03:14.155 CC module/accel/error/accel_error_rpc.o 00:03:14.155 CC module/keyring/file/keyring_rpc.o 00:03:14.155 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.155 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.155 LIB libspdk_scheduler_dynamic.a 00:03:14.155 LIB libspdk_keyring_linux.a 00:03:14.155 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.155 LIB libspdk_accel_error.a 00:03:14.155 LIB libspdk_blob_bdev.a 00:03:14.155 SO libspdk_keyring_linux.so.1.0 00:03:14.155 LIB libspdk_keyring_file.a 00:03:14.155 SO libspdk_accel_error.so.2.0 00:03:14.155 SO libspdk_blob_bdev.so.11.0 00:03:14.155 SO libspdk_keyring_file.so.2.0 00:03:14.155 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.416 SYMLINK libspdk_keyring_linux.so 00:03:14.416 LIB libspdk_accel_dsa.a 00:03:14.416 LIB libspdk_accel_ioat.a 00:03:14.416 SYMLINK libspdk_keyring_file.so 00:03:14.416 SYMLINK libspdk_accel_error.so 00:03:14.416 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:14.416 SYMLINK libspdk_blob_bdev.so 00:03:14.416 CC module/fsdev/aio/linux_aio_mgr.o 00:03:14.416 SO libspdk_accel_dsa.so.5.0 00:03:14.416 SO libspdk_accel_ioat.so.6.0 
00:03:14.416 CC module/accel/iaa/accel_iaa.o 00:03:14.416 SYMLINK libspdk_accel_dsa.so 00:03:14.416 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.416 SYMLINK libspdk_accel_ioat.so 00:03:14.416 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.675 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.675 CC module/bdev/delay/vbdev_delay.o 00:03:14.675 CC module/bdev/error/vbdev_error.o 00:03:14.675 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.675 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.675 LIB libspdk_scheduler_gscheduler.a 00:03:14.675 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:14.675 CC module/bdev/gpt/gpt.o 00:03:14.675 SO libspdk_scheduler_gscheduler.so.4.0 00:03:14.675 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.675 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.675 LIB libspdk_accel_iaa.a 00:03:14.675 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.675 LIB libspdk_fsdev_aio.a 00:03:14.675 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.675 SO libspdk_accel_iaa.so.3.0 00:03:14.933 SO libspdk_fsdev_aio.so.1.0 00:03:14.933 LIB libspdk_sock_posix.a 00:03:14.933 SO libspdk_sock_posix.so.6.0 00:03:14.933 SYMLINK libspdk_accel_iaa.so 00:03:14.933 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.933 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.933 SYMLINK libspdk_fsdev_aio.so 00:03:14.933 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.933 SYMLINK libspdk_sock_posix.so 00:03:14.933 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.933 LIB libspdk_bdev_error.a 00:03:14.933 SO libspdk_bdev_error.so.6.0 00:03:15.286 LIB libspdk_blobfs_bdev.a 00:03:15.286 LIB libspdk_bdev_delay.a 00:03:15.286 CC module/bdev/null/bdev_null.o 00:03:15.286 SO libspdk_blobfs_bdev.so.6.0 00:03:15.286 SYMLINK libspdk_bdev_error.so 00:03:15.286 CC module/bdev/malloc/bdev_malloc.o 00:03:15.286 CC module/bdev/nvme/bdev_nvme.o 00:03:15.286 SO libspdk_bdev_delay.so.6.0 00:03:15.286 LIB libspdk_bdev_gpt.a 00:03:15.286 SYMLINK libspdk_blobfs_bdev.so 00:03:15.286 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:15.286 SYMLINK libspdk_bdev_delay.so 00:03:15.286 SO libspdk_bdev_gpt.so.6.0 00:03:15.286 CC module/bdev/null/bdev_null_rpc.o 00:03:15.286 SYMLINK libspdk_bdev_gpt.so 00:03:15.286 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.286 CC module/bdev/raid/bdev_raid.o 00:03:15.545 CC module/bdev/split/vbdev_split.o 00:03:15.545 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.545 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.545 LIB libspdk_bdev_null.a 00:03:15.545 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.545 LIB libspdk_bdev_lvol.a 00:03:15.545 SO libspdk_bdev_null.so.6.0 00:03:15.545 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.545 SO libspdk_bdev_lvol.so.6.0 00:03:15.545 CC module/bdev/raid/raid0.o 00:03:15.545 LIB libspdk_bdev_malloc.a 00:03:15.803 SYMLINK libspdk_bdev_null.so 00:03:15.803 CC module/bdev/raid/raid1.o 00:03:15.803 SO libspdk_bdev_malloc.so.6.0 00:03:15.803 LIB libspdk_bdev_split.a 00:03:15.803 SYMLINK libspdk_bdev_lvol.so 00:03:15.803 SO libspdk_bdev_split.so.6.0 00:03:15.803 SYMLINK libspdk_bdev_malloc.so 00:03:15.803 LIB libspdk_bdev_passthru.a 00:03:15.803 SO libspdk_bdev_passthru.so.6.0 00:03:15.803 SYMLINK libspdk_bdev_split.so 00:03:16.062 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.062 SYMLINK libspdk_bdev_passthru.so 00:03:16.062 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.062 CC module/bdev/nvme/nvme_rpc.o 00:03:16.062 CC module/bdev/aio/bdev_aio.o 00:03:16.062 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.062 CC module/bdev/ftl/bdev_ftl.o 00:03:16.062 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.062 CC module/bdev/iscsi/bdev_iscsi.o 00:03:16.062 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.320 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:16.320 CC module/bdev/nvme/vbdev_opal.o 00:03:16.320 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.320 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.320 LIB libspdk_bdev_aio.a 00:03:16.579 SO libspdk_bdev_aio.so.6.0 00:03:16.579 LIB 
libspdk_bdev_ftl.a 00:03:16.579 LIB libspdk_bdev_iscsi.a 00:03:16.579 SO libspdk_bdev_ftl.so.6.0 00:03:16.579 SYMLINK libspdk_bdev_aio.so 00:03:16.579 SO libspdk_bdev_iscsi.so.6.0 00:03:16.579 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.579 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.579 LIB libspdk_bdev_zone_block.a 00:03:16.579 SYMLINK libspdk_bdev_ftl.so 00:03:16.579 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.579 SO libspdk_bdev_zone_block.so.6.0 00:03:16.579 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.579 SYMLINK libspdk_bdev_iscsi.so 00:03:16.579 CC module/bdev/raid/concat.o 00:03:16.579 CC module/bdev/raid/raid5f.o 00:03:16.579 SYMLINK libspdk_bdev_zone_block.so 00:03:17.145 LIB libspdk_bdev_virtio.a 00:03:17.404 SO libspdk_bdev_virtio.so.6.0 00:03:17.404 LIB libspdk_bdev_raid.a 00:03:17.404 SYMLINK libspdk_bdev_virtio.so 00:03:17.404 SO libspdk_bdev_raid.so.6.0 00:03:17.662 SYMLINK libspdk_bdev_raid.so 00:03:19.066 LIB libspdk_bdev_nvme.a 00:03:19.066 SO libspdk_bdev_nvme.so.7.1 00:03:19.066 SYMLINK libspdk_bdev_nvme.so 00:03:19.633 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.633 CC module/event/subsystems/keyring/keyring.o 00:03:19.633 CC module/event/subsystems/fsdev/fsdev.o 00:03:19.633 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.633 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.633 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.633 CC module/event/subsystems/sock/sock.o 00:03:19.633 CC module/event/subsystems/vmd/vmd.o 00:03:19.633 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.892 LIB libspdk_event_keyring.a 00:03:19.892 LIB libspdk_event_vhost_blk.a 00:03:19.892 LIB libspdk_event_fsdev.a 00:03:19.892 LIB libspdk_event_vmd.a 00:03:19.892 LIB libspdk_event_scheduler.a 00:03:19.892 SO libspdk_event_keyring.so.1.0 00:03:19.892 SO libspdk_event_fsdev.so.1.0 00:03:19.892 LIB libspdk_event_iobuf.a 00:03:19.892 SO libspdk_event_vmd.so.6.0 00:03:19.892 SO libspdk_event_vhost_blk.so.3.0 
00:03:19.892 SO libspdk_event_scheduler.so.4.0 00:03:19.892 LIB libspdk_event_sock.a 00:03:19.892 SO libspdk_event_iobuf.so.3.0 00:03:19.892 SYMLINK libspdk_event_fsdev.so 00:03:19.892 SYMLINK libspdk_event_keyring.so 00:03:19.892 SO libspdk_event_sock.so.5.0 00:03:19.892 SYMLINK libspdk_event_scheduler.so 00:03:19.892 SYMLINK libspdk_event_vhost_blk.so 00:03:19.892 SYMLINK libspdk_event_vmd.so 00:03:19.892 SYMLINK libspdk_event_iobuf.so 00:03:19.892 SYMLINK libspdk_event_sock.so 00:03:20.150 CC module/event/subsystems/accel/accel.o 00:03:20.409 LIB libspdk_event_accel.a 00:03:20.409 SO libspdk_event_accel.so.6.0 00:03:20.409 SYMLINK libspdk_event_accel.so 00:03:20.995 CC module/event/subsystems/bdev/bdev.o 00:03:20.995 LIB libspdk_event_bdev.a 00:03:21.257 SO libspdk_event_bdev.so.6.0 00:03:21.257 SYMLINK libspdk_event_bdev.so 00:03:21.515 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.515 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.515 CC module/event/subsystems/nbd/nbd.o 00:03:21.515 CC module/event/subsystems/ublk/ublk.o 00:03:21.515 CC module/event/subsystems/scsi/scsi.o 00:03:21.515 LIB libspdk_event_nbd.a 00:03:21.515 LIB libspdk_event_scsi.a 00:03:21.515 LIB libspdk_event_ublk.a 00:03:21.773 SO libspdk_event_nbd.so.6.0 00:03:21.773 SO libspdk_event_scsi.so.6.0 00:03:21.773 SO libspdk_event_ublk.so.3.0 00:03:21.773 SYMLINK libspdk_event_nbd.so 00:03:21.773 SYMLINK libspdk_event_scsi.so 00:03:21.773 SYMLINK libspdk_event_ublk.so 00:03:21.773 LIB libspdk_event_nvmf.a 00:03:21.773 SO libspdk_event_nvmf.so.6.0 00:03:21.773 SYMLINK libspdk_event_nvmf.so 00:03:22.031 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.031 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.290 LIB libspdk_event_vhost_scsi.a 00:03:22.290 SO libspdk_event_vhost_scsi.so.3.0 00:03:22.290 LIB libspdk_event_iscsi.a 00:03:22.290 SO libspdk_event_iscsi.so.6.0 00:03:22.290 SYMLINK libspdk_event_vhost_scsi.so 00:03:22.290 SYMLINK libspdk_event_iscsi.so 00:03:22.548 SO 
libspdk.so.6.0 00:03:22.548 SYMLINK libspdk.so 00:03:22.806 CXX app/trace/trace.o 00:03:22.806 CC app/trace_record/trace_record.o 00:03:22.806 TEST_HEADER include/spdk/accel.h 00:03:22.806 TEST_HEADER include/spdk/accel_module.h 00:03:22.806 TEST_HEADER include/spdk/assert.h 00:03:22.806 TEST_HEADER include/spdk/barrier.h 00:03:22.806 TEST_HEADER include/spdk/base64.h 00:03:22.806 TEST_HEADER include/spdk/bdev.h 00:03:22.806 TEST_HEADER include/spdk/bdev_module.h 00:03:22.806 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.806 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:22.806 TEST_HEADER include/spdk/bit_array.h 00:03:22.806 TEST_HEADER include/spdk/bit_pool.h 00:03:22.806 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.806 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.806 TEST_HEADER include/spdk/blobfs.h 00:03:22.806 TEST_HEADER include/spdk/blob.h 00:03:22.806 TEST_HEADER include/spdk/conf.h 00:03:22.806 TEST_HEADER include/spdk/config.h 00:03:22.806 TEST_HEADER include/spdk/cpuset.h 00:03:22.806 TEST_HEADER include/spdk/crc16.h 00:03:22.806 TEST_HEADER include/spdk/crc32.h 00:03:22.806 TEST_HEADER include/spdk/crc64.h 00:03:22.806 TEST_HEADER include/spdk/dif.h 00:03:22.806 TEST_HEADER include/spdk/dma.h 00:03:22.806 TEST_HEADER include/spdk/endian.h 00:03:22.806 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.806 TEST_HEADER include/spdk/env.h 00:03:22.806 TEST_HEADER include/spdk/event.h 00:03:22.806 TEST_HEADER include/spdk/fd_group.h 00:03:22.806 TEST_HEADER include/spdk/fd.h 00:03:22.806 TEST_HEADER include/spdk/file.h 00:03:22.806 TEST_HEADER include/spdk/fsdev.h 00:03:22.806 TEST_HEADER include/spdk/fsdev_module.h 00:03:22.806 TEST_HEADER include/spdk/ftl.h 00:03:22.806 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:22.806 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.806 TEST_HEADER include/spdk/hexlify.h 00:03:22.806 CC examples/ioat/perf/perf.o 00:03:22.806 TEST_HEADER include/spdk/histogram_data.h 00:03:22.806 CC 
test/thread/poller_perf/poller_perf.o 00:03:22.806 TEST_HEADER include/spdk/idxd.h 00:03:22.806 CC examples/util/zipf/zipf.o 00:03:22.806 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.806 TEST_HEADER include/spdk/init.h 00:03:22.806 TEST_HEADER include/spdk/ioat.h 00:03:22.806 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.806 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.806 TEST_HEADER include/spdk/json.h 00:03:22.806 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.806 TEST_HEADER include/spdk/keyring.h 00:03:22.806 TEST_HEADER include/spdk/keyring_module.h 00:03:22.806 TEST_HEADER include/spdk/likely.h 00:03:22.806 TEST_HEADER include/spdk/log.h 00:03:22.806 TEST_HEADER include/spdk/lvol.h 00:03:22.806 TEST_HEADER include/spdk/md5.h 00:03:22.806 TEST_HEADER include/spdk/memory.h 00:03:22.806 TEST_HEADER include/spdk/mmio.h 00:03:22.806 TEST_HEADER include/spdk/nbd.h 00:03:22.806 TEST_HEADER include/spdk/net.h 00:03:22.806 CC test/dma/test_dma/test_dma.o 00:03:22.806 TEST_HEADER include/spdk/notify.h 00:03:22.806 TEST_HEADER include/spdk/nvme.h 00:03:23.065 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.065 CC test/app/bdev_svc/bdev_svc.o 00:03:23.065 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.065 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.065 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.065 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.065 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.065 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.065 TEST_HEADER include/spdk/nvmf.h 00:03:23.065 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.065 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.065 TEST_HEADER include/spdk/opal.h 00:03:23.065 TEST_HEADER include/spdk/opal_spec.h 00:03:23.065 TEST_HEADER include/spdk/pci_ids.h 00:03:23.065 TEST_HEADER include/spdk/pipe.h 00:03:23.065 TEST_HEADER include/spdk/queue.h 00:03:23.065 TEST_HEADER include/spdk/reduce.h 00:03:23.065 TEST_HEADER include/spdk/rpc.h 00:03:23.065 TEST_HEADER include/spdk/scheduler.h 
00:03:23.065 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.065 TEST_HEADER include/spdk/scsi.h 00:03:23.065 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.065 TEST_HEADER include/spdk/sock.h 00:03:23.065 TEST_HEADER include/spdk/stdinc.h 00:03:23.065 TEST_HEADER include/spdk/string.h 00:03:23.065 TEST_HEADER include/spdk/thread.h 00:03:23.065 TEST_HEADER include/spdk/trace.h 00:03:23.065 TEST_HEADER include/spdk/trace_parser.h 00:03:23.065 TEST_HEADER include/spdk/tree.h 00:03:23.065 TEST_HEADER include/spdk/ublk.h 00:03:23.065 TEST_HEADER include/spdk/util.h 00:03:23.065 TEST_HEADER include/spdk/uuid.h 00:03:23.065 LINK interrupt_tgt 00:03:23.065 TEST_HEADER include/spdk/version.h 00:03:23.065 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.065 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.065 TEST_HEADER include/spdk/vhost.h 00:03:23.065 TEST_HEADER include/spdk/vmd.h 00:03:23.065 TEST_HEADER include/spdk/xor.h 00:03:23.065 TEST_HEADER include/spdk/zipf.h 00:03:23.065 CXX test/cpp_headers/accel.o 00:03:23.065 LINK poller_perf 00:03:23.065 LINK spdk_trace_record 00:03:23.065 LINK zipf 00:03:23.065 LINK ioat_perf 00:03:23.065 LINK bdev_svc 00:03:23.363 LINK spdk_trace 00:03:23.363 CXX test/cpp_headers/accel_module.o 00:03:23.363 CXX test/cpp_headers/assert.o 00:03:23.363 CXX test/cpp_headers/barrier.o 00:03:23.363 CC test/app/histogram_perf/histogram_perf.o 00:03:23.363 CC examples/ioat/verify/verify.o 00:03:23.363 CXX test/cpp_headers/base64.o 00:03:23.363 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:23.621 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.621 CC test/app/jsoncat/jsoncat.o 00:03:23.621 LINK histogram_perf 00:03:23.621 LINK test_dma 00:03:23.621 CC test/app/stub/stub.o 00:03:23.621 CC app/nvmf_tgt/nvmf_main.o 00:03:23.621 CXX test/cpp_headers/bdev.o 00:03:23.621 LINK verify 00:03:23.621 LINK mem_callbacks 00:03:23.621 LINK jsoncat 00:03:23.880 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.880 LINK stub 00:03:23.880 LINK nvmf_tgt 
00:03:23.880 CXX test/cpp_headers/bdev_module.o 00:03:23.880 CC test/env/vtophys/vtophys.o 00:03:23.880 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.138 LINK nvme_fuzz 00:03:24.138 CC examples/thread/thread/thread_ex.o 00:03:24.138 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.138 CC examples/sock/hello_world/hello_sock.o 00:03:24.138 LINK vtophys 00:03:24.138 CXX test/cpp_headers/bdev_zone.o 00:03:24.138 CC examples/vmd/led/led.o 00:03:24.138 CXX test/cpp_headers/bit_array.o 00:03:24.397 LINK lsvmd 00:03:24.397 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.397 LINK led 00:03:24.397 LINK thread 00:03:24.397 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.397 CXX test/cpp_headers/bit_pool.o 00:03:24.397 LINK hello_sock 00:03:24.397 CC test/env/memory/memory_ut.o 00:03:24.655 LINK iscsi_tgt 00:03:24.655 CC test/env/pci/pci_ut.o 00:03:24.655 LINK vhost_fuzz 00:03:24.655 LINK env_dpdk_post_init 00:03:24.655 CXX test/cpp_headers/blob_bdev.o 00:03:24.655 CC test/rpc_client/rpc_client_test.o 00:03:24.655 CC examples/idxd/perf/perf.o 00:03:24.914 CC app/spdk_lspci/spdk_lspci.o 00:03:24.914 CC app/spdk_tgt/spdk_tgt.o 00:03:24.914 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.914 LINK rpc_client_test 00:03:24.914 CC app/spdk_nvme_perf/perf.o 00:03:24.914 LINK spdk_lspci 00:03:24.914 CC test/accel/dif/dif.o 00:03:25.172 LINK spdk_tgt 00:03:25.172 CXX test/cpp_headers/blobfs.o 00:03:25.172 LINK pci_ut 00:03:25.172 LINK idxd_perf 00:03:25.172 CXX test/cpp_headers/blob.o 00:03:25.172 CC app/spdk_nvme_identify/identify.o 00:03:25.172 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:25.429 CC app/spdk_nvme_discover/discovery_aer.o 00:03:25.429 CC app/spdk_top/spdk_top.o 00:03:25.429 CXX test/cpp_headers/conf.o 00:03:25.687 CC examples/accel/perf/accel_perf.o 00:03:25.687 LINK hello_fsdev 00:03:25.687 LINK spdk_nvme_discover 00:03:25.687 CXX test/cpp_headers/config.o 00:03:25.687 CXX test/cpp_headers/cpuset.o 00:03:25.687 LINK iscsi_fuzz 00:03:25.982 CXX test/cpp_headers/crc16.o 
00:03:25.982 CC app/vhost/vhost.o 00:03:25.982 LINK memory_ut 00:03:25.982 LINK dif 00:03:25.982 LINK spdk_nvme_perf 00:03:25.982 CC test/blobfs/mkfs/mkfs.o 00:03:25.982 CXX test/cpp_headers/crc32.o 00:03:26.240 LINK vhost 00:03:26.240 CC test/event/event_perf/event_perf.o 00:03:26.240 LINK accel_perf 00:03:26.240 CC test/event/reactor/reactor.o 00:03:26.240 CC test/event/reactor_perf/reactor_perf.o 00:03:26.240 CXX test/cpp_headers/crc64.o 00:03:26.240 LINK mkfs 00:03:26.240 CC test/event/app_repeat/app_repeat.o 00:03:26.498 LINK spdk_nvme_identify 00:03:26.498 LINK event_perf 00:03:26.498 LINK reactor 00:03:26.498 LINK reactor_perf 00:03:26.498 LINK app_repeat 00:03:26.498 CXX test/cpp_headers/dif.o 00:03:26.498 CXX test/cpp_headers/dma.o 00:03:26.498 CC app/spdk_dd/spdk_dd.o 00:03:26.498 CXX test/cpp_headers/endian.o 00:03:26.498 LINK spdk_top 00:03:26.756 CC examples/blob/hello_world/hello_blob.o 00:03:26.756 CC examples/blob/cli/blobcli.o 00:03:26.756 CC test/event/scheduler/scheduler.o 00:03:26.756 CXX test/cpp_headers/env_dpdk.o 00:03:26.756 CXX test/cpp_headers/env.o 00:03:26.756 CC examples/nvme/hello_world/hello_world.o 00:03:26.756 CC examples/nvme/reconnect/reconnect.o 00:03:26.756 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.015 CC test/lvol/esnap/esnap.o 00:03:27.015 LINK hello_blob 00:03:27.015 CXX test/cpp_headers/event.o 00:03:27.015 CXX test/cpp_headers/fd_group.o 00:03:27.015 LINK scheduler 00:03:27.015 LINK spdk_dd 00:03:27.015 LINK hello_world 00:03:27.273 CXX test/cpp_headers/fd.o 00:03:27.273 CXX test/cpp_headers/file.o 00:03:27.273 LINK reconnect 00:03:27.273 CXX test/cpp_headers/fsdev.o 00:03:27.273 CC examples/nvme/arbitration/arbitration.o 00:03:27.273 LINK blobcli 00:03:27.273 CC app/fio/nvme/fio_plugin.o 00:03:27.273 CC test/nvme/aer/aer.o 00:03:27.531 CXX test/cpp_headers/fsdev_module.o 00:03:27.531 LINK nvme_manage 00:03:27.531 CC test/nvme/reset/reset.o 00:03:27.531 CC examples/nvme/hotplug/hotplug.o 00:03:27.531 CC 
app/fio/bdev/fio_plugin.o 00:03:27.789 CXX test/cpp_headers/ftl.o 00:03:27.789 LINK aer 00:03:27.789 LINK arbitration 00:03:27.789 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:27.789 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.789 LINK hotplug 00:03:27.789 LINK reset 00:03:27.789 CXX test/cpp_headers/fuse_dispatcher.o 00:03:28.047 LINK cmb_copy 00:03:28.047 CC test/nvme/sgl/sgl.o 00:03:28.047 CXX test/cpp_headers/gpt_spec.o 00:03:28.047 CC test/nvme/e2edp/nvme_dp.o 00:03:28.047 LINK hello_bdev 00:03:28.047 LINK spdk_nvme 00:03:28.047 CC test/nvme/overhead/overhead.o 00:03:28.307 LINK spdk_bdev 00:03:28.307 CXX test/cpp_headers/hexlify.o 00:03:28.307 CC test/bdev/bdevio/bdevio.o 00:03:28.307 CC examples/nvme/abort/abort.o 00:03:28.307 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.307 CXX test/cpp_headers/histogram_data.o 00:03:28.307 LINK sgl 00:03:28.307 LINK nvme_dp 00:03:28.565 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.565 CC test/nvme/err_injection/err_injection.o 00:03:28.565 CXX test/cpp_headers/idxd.o 00:03:28.565 LINK pmr_persistence 00:03:28.565 LINK overhead 00:03:28.565 CC test/nvme/startup/startup.o 00:03:28.565 CC test/nvme/reserve/reserve.o 00:03:28.823 LINK bdevio 00:03:28.823 CXX test/cpp_headers/idxd_spec.o 00:03:28.823 LINK err_injection 00:03:28.823 LINK abort 00:03:28.823 CXX test/cpp_headers/init.o 00:03:28.823 CC test/nvme/simple_copy/simple_copy.o 00:03:28.823 LINK startup 00:03:28.823 LINK reserve 00:03:28.823 CXX test/cpp_headers/ioat.o 00:03:29.081 CC test/nvme/boot_partition/boot_partition.o 00:03:29.081 CC test/nvme/connect_stress/connect_stress.o 00:03:29.081 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.081 CC test/nvme/compliance/nvme_compliance.o 00:03:29.081 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:29.081 LINK simple_copy 00:03:29.081 CXX test/cpp_headers/ioat_spec.o 00:03:29.081 CC test/nvme/fdp/fdp.o 00:03:29.081 LINK boot_partition 00:03:29.405 LINK connect_stress 00:03:29.405 LINK 
fused_ordering 00:03:29.405 CXX test/cpp_headers/iscsi_spec.o 00:03:29.405 LINK doorbell_aers 00:03:29.405 CXX test/cpp_headers/json.o 00:03:29.405 CXX test/cpp_headers/jsonrpc.o 00:03:29.405 CC test/nvme/cuse/cuse.o 00:03:29.405 LINK nvme_compliance 00:03:29.405 CXX test/cpp_headers/keyring.o 00:03:29.405 CXX test/cpp_headers/keyring_module.o 00:03:29.405 CXX test/cpp_headers/likely.o 00:03:29.405 LINK bdevperf 00:03:29.667 CXX test/cpp_headers/log.o 00:03:29.667 LINK fdp 00:03:29.667 CXX test/cpp_headers/lvol.o 00:03:29.667 CXX test/cpp_headers/md5.o 00:03:29.667 CXX test/cpp_headers/memory.o 00:03:29.667 CXX test/cpp_headers/mmio.o 00:03:29.667 CXX test/cpp_headers/nbd.o 00:03:29.667 CXX test/cpp_headers/net.o 00:03:29.667 CXX test/cpp_headers/notify.o 00:03:29.667 CXX test/cpp_headers/nvme.o 00:03:29.926 CXX test/cpp_headers/nvme_intel.o 00:03:29.926 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.926 CXX test/cpp_headers/nvme_spec.o 00:03:29.926 CXX test/cpp_headers/nvme_zns.o 00:03:29.926 CXX test/cpp_headers/nvmf_cmd.o 00:03:29.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:29.926 CC examples/nvmf/nvmf/nvmf.o 00:03:29.926 CXX test/cpp_headers/nvmf.o 00:03:30.184 CXX test/cpp_headers/nvmf_spec.o 00:03:30.184 CXX test/cpp_headers/nvmf_transport.o 00:03:30.184 CXX test/cpp_headers/opal.o 00:03:30.184 CXX test/cpp_headers/opal_spec.o 00:03:30.184 CXX test/cpp_headers/pci_ids.o 00:03:30.184 CXX test/cpp_headers/pipe.o 00:03:30.184 CXX test/cpp_headers/queue.o 00:03:30.184 CXX test/cpp_headers/reduce.o 00:03:30.184 CXX test/cpp_headers/rpc.o 00:03:30.442 CXX test/cpp_headers/scheduler.o 00:03:30.442 CXX test/cpp_headers/scsi.o 00:03:30.443 CXX test/cpp_headers/scsi_spec.o 00:03:30.443 CXX test/cpp_headers/sock.o 00:03:30.443 CXX test/cpp_headers/stdinc.o 00:03:30.443 LINK nvmf 00:03:30.443 CXX test/cpp_headers/string.o 00:03:30.443 CXX test/cpp_headers/thread.o 00:03:30.700 CXX test/cpp_headers/trace.o 00:03:30.700 CXX 
test/cpp_headers/trace_parser.o 00:03:30.700 CXX test/cpp_headers/tree.o 00:03:30.700 CXX test/cpp_headers/ublk.o 00:03:30.700 CXX test/cpp_headers/util.o 00:03:30.700 CXX test/cpp_headers/uuid.o 00:03:30.700 CXX test/cpp_headers/version.o 00:03:30.700 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.700 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.700 CXX test/cpp_headers/vhost.o 00:03:30.700 CXX test/cpp_headers/vmd.o 00:03:30.700 CXX test/cpp_headers/xor.o 00:03:30.700 CXX test/cpp_headers/zipf.o 00:03:31.265 LINK cuse 00:03:34.572 LINK esnap 00:03:34.572 00:03:34.572 real 1m41.611s 00:03:34.572 user 9m15.339s 00:03:34.572 sys 1m47.282s 00:03:34.572 12:34:23 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:34.572 12:34:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:34.572 ************************************ 00:03:34.572 END TEST make 00:03:34.572 ************************************ 00:03:34.572 12:34:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:34.572 12:34:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:34.572 12:34:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:34.572 12:34:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.572 12:34:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:34.572 12:34:23 -- pm/common@44 -- $ pid=5255 00:03:34.572 12:34:23 -- pm/common@50 -- $ kill -TERM 5255 00:03:34.572 12:34:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.572 12:34:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:34.572 12:34:23 -- pm/common@44 -- $ pid=5257 00:03:34.572 12:34:23 -- pm/common@50 -- $ kill -TERM 5257 00:03:34.572 12:34:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:34.572 12:34:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh 
/home/vagrant/spdk_repo/autorun-spdk.conf 00:03:34.831 12:34:23 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:34.831 12:34:23 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:34.831 12:34:23 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:34.831 12:34:23 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:34.831 12:34:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.831 12:34:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.831 12:34:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.831 12:34:23 -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.831 12:34:23 -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.831 12:34:23 -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.831 12:34:23 -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.831 12:34:23 -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.831 12:34:23 -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.831 12:34:23 -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.831 12:34:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.831 12:34:23 -- scripts/common.sh@344 -- # case "$op" in 00:03:34.831 12:34:23 -- scripts/common.sh@345 -- # : 1 00:03:34.831 12:34:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.831 12:34:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.831 12:34:23 -- scripts/common.sh@365 -- # decimal 1 00:03:34.831 12:34:23 -- scripts/common.sh@353 -- # local d=1 00:03:34.831 12:34:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.831 12:34:23 -- scripts/common.sh@355 -- # echo 1 00:03:34.831 12:34:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.831 12:34:23 -- scripts/common.sh@366 -- # decimal 2 00:03:34.831 12:34:23 -- scripts/common.sh@353 -- # local d=2 00:03:34.831 12:34:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.831 12:34:23 -- scripts/common.sh@355 -- # echo 2 00:03:34.831 12:34:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.831 12:34:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.831 12:34:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.831 12:34:23 -- scripts/common.sh@368 -- # return 0 00:03:34.831 12:34:23 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.831 12:34:23 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:34.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.831 --rc genhtml_branch_coverage=1 00:03:34.832 --rc genhtml_function_coverage=1 00:03:34.832 --rc genhtml_legend=1 00:03:34.832 --rc geninfo_all_blocks=1 00:03:34.832 --rc geninfo_unexecuted_blocks=1 00:03:34.832 00:03:34.832 ' 00:03:34.832 12:34:23 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.832 --rc genhtml_branch_coverage=1 00:03:34.832 --rc genhtml_function_coverage=1 00:03:34.832 --rc genhtml_legend=1 00:03:34.832 --rc geninfo_all_blocks=1 00:03:34.832 --rc geninfo_unexecuted_blocks=1 00:03:34.832 00:03:34.832 ' 00:03:34.832 12:34:23 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.832 --rc genhtml_branch_coverage=1 00:03:34.832 --rc 
genhtml_function_coverage=1 00:03:34.832 --rc genhtml_legend=1 00:03:34.832 --rc geninfo_all_blocks=1 00:03:34.832 --rc geninfo_unexecuted_blocks=1 00:03:34.832 00:03:34.832 ' 00:03:34.832 12:34:23 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:34.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.832 --rc genhtml_branch_coverage=1 00:03:34.832 --rc genhtml_function_coverage=1 00:03:34.832 --rc genhtml_legend=1 00:03:34.832 --rc geninfo_all_blocks=1 00:03:34.832 --rc geninfo_unexecuted_blocks=1 00:03:34.832 00:03:34.832 ' 00:03:34.832 12:34:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:34.832 12:34:23 -- nvmf/common.sh@7 -- # uname -s 00:03:34.832 12:34:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:34.832 12:34:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:34.832 12:34:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:34.832 12:34:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:34.832 12:34:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:34.832 12:34:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:34.832 12:34:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:34.832 12:34:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:34.832 12:34:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:34.832 12:34:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:34.832 12:34:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7b06248-f3b3-4d29-8cee-a1767ec92231 00:03:34.832 12:34:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=a7b06248-f3b3-4d29-8cee-a1767ec92231 00:03:34.832 12:34:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:34.832 12:34:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:34.832 12:34:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:34.832 12:34:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:34.832 12:34:23 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:34.832 12:34:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:34.832 12:34:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:34.832 12:34:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:34.832 12:34:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:34.832 12:34:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.832 12:34:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.832 12:34:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.832 12:34:23 -- paths/export.sh@5 -- # export PATH 00:03:34.832 12:34:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:34.832 12:34:23 -- nvmf/common.sh@51 -- # : 0 00:03:34.832 12:34:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:34.832 12:34:23 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:34.832 12:34:23 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:34.832 12:34:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:34.832 12:34:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:34.832 12:34:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:34.832 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:34.832 12:34:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:34.832 12:34:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:34.832 12:34:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:34.832 12:34:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:34.832 12:34:23 -- spdk/autotest.sh@32 -- # uname -s 00:03:34.832 12:34:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:34.832 12:34:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:34.832 12:34:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.832 12:34:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:34.832 12:34:23 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:34.832 12:34:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:34.832 12:34:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:34.832 12:34:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:34.832 12:34:23 -- spdk/autotest.sh@48 -- # udevadm_pid=54375 00:03:34.832 12:34:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:34.832 12:34:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:34.832 12:34:23 -- pm/common@17 -- # local monitor 00:03:34.832 12:34:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.832 12:34:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:34.832 12:34:23 -- pm/common@25 -- # sleep 1 00:03:34.832 12:34:23 -- pm/common@21 -- # date +%s 00:03:34.832 12:34:23 -- 
pm/common@21 -- # date +%s 00:03:34.832 12:34:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730896463 00:03:35.092 12:34:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730896463 00:03:35.092 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730896463_collect-cpu-load.pm.log 00:03:35.092 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730896463_collect-vmstat.pm.log 00:03:36.028 12:34:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:36.028 12:34:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:36.028 12:34:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:36.028 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:03:36.028 12:34:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:36.028 12:34:24 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:36.028 12:34:24 -- common/autotest_common.sh@10 -- # set +x 00:03:36.028 12:34:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:36.028 12:34:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:36.028 12:34:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:36.028 12:34:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:36.028 12:34:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:36.028 12:34:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:36.028 12:34:24 -- common/autotest_common.sh@1455 -- # uname 00:03:36.028 12:34:24 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:36.028 12:34:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:36.028 12:34:24 -- common/autotest_common.sh@1475 -- 
# uname 00:03:36.028 12:34:24 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:36.028 12:34:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:36.028 12:34:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:36.028 lcov: LCOV version 1.15 00:03:36.028 12:34:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:54.229 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.229 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.326 12:35:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:12.326 12:35:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.326 12:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:12.326 12:35:00 -- spdk/autotest.sh@78 -- # rm -f 00:04:12.326 12:35:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.585 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:12.585 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:12.585 12:35:01 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.585 12:35:01 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:12.585 12:35:01 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:12.585 12:35:01 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:12.585 
12:35:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.585 12:35:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:12.585 12:35:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:12.585 12:35:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.585 12:35:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:12.585 12:35:01 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:12.585 12:35:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.585 12:35:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:12.585 12:35:01 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:12.585 12:35:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.585 12:35:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:12.585 12:35:01 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:12.585 12:35:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:12.585 12:35:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.585 12:35:01 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.585 12:35:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.585 12:35:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.585 12:35:01 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:12.585 12:35:01 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.585 12:35:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.844 No valid GPT data, bailing 00:04:12.844 12:35:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.844 12:35:01 -- scripts/common.sh@394 -- # pt= 00:04:12.844 12:35:01 -- scripts/common.sh@395 -- # return 1 00:04:12.844 12:35:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.844 1+0 records in 00:04:12.844 1+0 records out 00:04:12.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468194 s, 224 MB/s 00:04:12.844 12:35:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.844 12:35:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.844 12:35:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:12.844 12:35:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:12.844 12:35:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:12.844 No valid GPT data, bailing 00:04:12.844 12:35:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.844 12:35:01 -- scripts/common.sh@394 -- # pt= 00:04:12.844 12:35:01 -- scripts/common.sh@395 -- # return 1 00:04:12.844 12:35:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.844 1+0 records in 00:04:12.844 1+0 records out 00:04:12.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472713 s, 222 MB/s 00:04:12.844 12:35:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.844 12:35:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.844 12:35:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:12.844 12:35:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:12.844 12:35:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:12.844 No valid GPT data, bailing 00:04:12.844 12:35:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:12.844 12:35:01 -- scripts/common.sh@394 -- # pt= 00:04:12.844 12:35:01 -- scripts/common.sh@395 -- # return 1 00:04:12.844 12:35:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:12.844 1+0 records in 00:04:12.844 1+0 records out 00:04:12.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052087 s, 201 MB/s 00:04:12.844 12:35:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.844 12:35:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.844 12:35:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:12.844 12:35:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:12.844 12:35:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:13.102 No valid GPT data, bailing 00:04:13.102 12:35:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:13.102 12:35:01 -- scripts/common.sh@394 -- # pt= 00:04:13.102 12:35:01 -- scripts/common.sh@395 -- # return 1 00:04:13.102 12:35:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:13.102 1+0 records in 00:04:13.102 1+0 records out 00:04:13.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487708 s, 215 MB/s 00:04:13.102 12:35:01 -- spdk/autotest.sh@105 -- # sync 00:04:13.102 12:35:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:13.102 12:35:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:13.102 12:35:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:15.002 12:35:03 -- spdk/autotest.sh@111 -- # uname -s 00:04:15.002 12:35:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:15.002 12:35:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:15.002 12:35:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
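Each of the four namespaces above goes through the same two-step cleanup: `block_in_use` asks blkid for a partition-table type (the `spdk-gpt.py` probe having already reported "No valid GPT data"), and when none is found the first MiB is zeroed with dd. A hedged sketch of that pattern, with `DEV` as a placeholder rather than a real device from this run:

```shell
# Sketch of the block_in_use + wipe step traced above: a device counts as
# in use if blkid reports a non-empty partition-table type (PTTYPE).
block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null)
    [[ -n $pt ]]   # non-empty PTTYPE => a partition table exists
}

DEV=/dev/nvme0n1   # hypothetical device node; destructive if real!
if ! block_in_use "$DEV"; then
    # Zero the first MiB so stale metadata cannot confuse later tests.
    dd if=/dev/zero of="$DEV" bs=1M count=1
fi
```

Note the dd here is deliberately small (one 1 MiB block, matching the `1+0 records in/out` lines above); the goal is to clobber superblocks and GPT headers, not to wipe the whole namespace.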
00:04:15.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.937 Hugepages 00:04:15.937 node hugesize free / total 00:04:15.937 node0 1048576kB 0 / 0 00:04:15.937 node0 2048kB 0 / 0 00:04:15.937 00:04:15.937 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.937 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:15.937 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:15.937 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:15.937 12:35:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.937 12:35:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.937 12:35:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.937 12:35:04 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.761 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.761 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.761 12:35:05 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:18.135 12:35:06 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:18.135 12:35:06 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:18.135 12:35:06 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.135 12:35:06 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:18.135 12:35:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:18.135 12:35:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:18.135 12:35:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.135 12:35:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:18.135 12:35:06 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.135 12:35:06 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:18.135 12:35:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:18.135 12:35:06 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.135 Waiting for block devices as requested 00:04:18.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.394 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.394 12:35:07 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:18.394 12:35:07 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:18.394 12:35:07 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:18.394 12:35:07 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:18.394 12:35:07 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:18.394 12:35:07 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:18.394 12:35:07 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:18.394 12:35:07 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:18.394 12:35:07 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:18.394 12:35:07 -- common/autotest_common.sh@1532 -- 
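The `get_nvme_ctrlr_from_bdf` calls traced above map a PCI address back to its kernel controller name by resolving the `/sys/class/nvme/nvme*` symlinks and grepping for the wanted BDF (note how `0000:00:10.0` resolves to `nvme1` and `0000:00:11.0` to `nvme0` on this host, so the numbering cannot be assumed). A rough standalone equivalent, again a sketch rather than the SPDK helper:

```shell
# Sketch: resolve each /sys/class/nvme/nvme* symlink, keep the one whose
# real path contains "<bdf>/nvme/nvme", and return its basename.
get_nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    path=$(readlink -f /sys/class/nvme/nvme* 2>/dev/null | grep "$bdf/nvme/nvme") || return 1
    basename "$path"
}

# Hypothetical usage (prints e.g. "nvme1" on a host where that BDF exists):
get_nvme_ctrlr_from_bdf 0000:00:10.0
```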
# [[ 8 -ne 0 ]] 00:04:18.394 12:35:07 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:18.394 12:35:07 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:18.394 12:35:07 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:18.394 12:35:07 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:18.394 12:35:07 -- common/autotest_common.sh@1541 -- # continue 00:04:18.394 12:35:07 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:18.394 12:35:07 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:18.394 12:35:07 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:18.394 12:35:07 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:18.394 12:35:07 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:18.394 12:35:07 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:18.394 12:35:07 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:18.394 12:35:07 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:18.394 12:35:07 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:18.394 12:35:07 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:18.653 12:35:07 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:18.653 12:35:07 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:18.653 12:35:07 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:18.653 12:35:07 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:18.653 12:35:07 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:18.653 12:35:07 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:18.653 12:35:07 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:04:18.653 12:35:07 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:18.653 12:35:07 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:18.653 12:35:07 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:18.653 12:35:07 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:18.653 12:35:07 -- common/autotest_common.sh@1541 -- # continue 00:04:18.653 12:35:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:18.653 12:35:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.653 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 12:35:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.653 12:35:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.653 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 12:35:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.218 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.476 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.476 12:35:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:19.476 12:35:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.476 12:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:19.476 12:35:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:19.476 12:35:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:19.476 12:35:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.476 12:35:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:19.476 12:35:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:19.476 12:35:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:19.476 12:35:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:19.476 12:35:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:19.476 
12:35:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:19.476 12:35:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:19.476 12:35:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.476 12:35:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.476 12:35:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:19.476 12:35:08 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:19.476 12:35:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:19.476 12:35:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:19.476 12:35:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:19.476 12:35:08 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:19.476 12:35:08 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.476 12:35:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:19.476 12:35:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:19.476 12:35:08 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:19.476 12:35:08 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.476 12:35:08 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:19.476 12:35:08 -- common/autotest_common.sh@1570 -- # return 0 00:04:19.476 12:35:08 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:19.476 12:35:08 -- common/autotest_common.sh@1578 -- # return 0 00:04:19.476 12:35:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:19.476 12:35:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:19.476 12:35:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.476 12:35:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.476 12:35:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:19.476 12:35:08 -- 
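The `get_nvme_bdfs_by_id 0x0a54` step above filters the enumerated BDFs by reading each device's PCI device id from sysfs; on this run both emulated controllers report `0x0010`, nothing matches, and `opal_revert_cleanup` returns without doing work. A sketch of that filter, with the BDF list standing in for the output of SPDK's `gen_nvme.sh | jq` pipeline:

```shell
# Sketch of the device-id filter traced above: keep only BDFs whose PCI
# device id matches the wanted value (0x0a54 in the trace).
wanted=0x0a54
bdfs=(0000:00:10.0 0000:00:11.0)   # placeholder addresses from this log
matched=()
for bdf in "${bdfs[@]}"; do
    devfile=/sys/bus/pci/devices/$bdf/device
    [[ -e $devfile ]] || continue        # skip BDFs absent on this host
    device=$(< "$devfile")
    [[ $device == "$wanted" ]] && matched+=("$bdf")
done
(( ${#matched[@]} )) && printf '%s\n' "${matched[@]}"
```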
common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.476 12:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:19.476 12:35:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:19.476 12:35:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.476 12:35:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.476 12:35:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.476 12:35:08 -- common/autotest_common.sh@10 -- # set +x 00:04:19.476 ************************************ 00:04:19.476 START TEST env 00:04:19.476 ************************************ 00:04:19.476 12:35:08 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.736 * Looking for test storage... 00:04:19.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:19.736 12:35:08 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.736 12:35:08 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.736 12:35:08 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.736 12:35:08 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.736 12:35:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.736 12:35:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.736 12:35:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.736 12:35:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.736 12:35:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.736 12:35:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.736 12:35:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.736 12:35:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.736 12:35:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.736 12:35:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.736 12:35:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.736 12:35:08 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:19.736 12:35:08 env -- scripts/common.sh@345 -- # : 1 00:04:19.736 12:35:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.736 12:35:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.736 12:35:08 env -- scripts/common.sh@365 -- # decimal 1 00:04:19.736 12:35:08 env -- scripts/common.sh@353 -- # local d=1 00:04:19.736 12:35:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.737 12:35:08 env -- scripts/common.sh@355 -- # echo 1 00:04:19.737 12:35:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.737 12:35:08 env -- scripts/common.sh@366 -- # decimal 2 00:04:19.737 12:35:08 env -- scripts/common.sh@353 -- # local d=2 00:04:19.737 12:35:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.737 12:35:08 env -- scripts/common.sh@355 -- # echo 2 00:04:19.737 12:35:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.737 12:35:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.737 12:35:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.737 12:35:08 env -- scripts/common.sh@368 -- # return 0 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.737 --rc genhtml_branch_coverage=1 00:04:19.737 --rc genhtml_function_coverage=1 00:04:19.737 --rc genhtml_legend=1 00:04:19.737 --rc geninfo_all_blocks=1 00:04:19.737 --rc geninfo_unexecuted_blocks=1 00:04:19.737 00:04:19.737 ' 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.737 --rc genhtml_branch_coverage=1 00:04:19.737 --rc genhtml_function_coverage=1 00:04:19.737 --rc genhtml_legend=1 00:04:19.737 --rc 
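The `cmp_versions` trace above splits both version strings on `.`, walks the fields in parallel (padding the shorter one, which is why `ver2[v]` becomes `2` against `ver1`'s `1.15`), and decides `lt 1.15 2` is true at the first differing field. A compact sketch of that numeric, field-by-field comparison (not the actual `scripts/common.sh` implementation):

```shell
# Sketch of cmp_versions: compare dotted versions field by field,
# treating missing fields as 0, so "1.15" < "2" holds as in the trace.
cmp_versions() {
    local IFS=.
    local -a ver1=($1) ver2=($3)
    local op=$2 v
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"
```

This is why the lcov version check succeeds: plain string comparison would get `"1.15" < "2"` right by accident, but would wrongly order `"1.9"` after `"1.15"`, so the fields must be compared as integers.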
geninfo_all_blocks=1 00:04:19.737 --rc geninfo_unexecuted_blocks=1 00:04:19.737 00:04:19.737 ' 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.737 --rc genhtml_branch_coverage=1 00:04:19.737 --rc genhtml_function_coverage=1 00:04:19.737 --rc genhtml_legend=1 00:04:19.737 --rc geninfo_all_blocks=1 00:04:19.737 --rc geninfo_unexecuted_blocks=1 00:04:19.737 00:04:19.737 ' 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.737 --rc genhtml_branch_coverage=1 00:04:19.737 --rc genhtml_function_coverage=1 00:04:19.737 --rc genhtml_legend=1 00:04:19.737 --rc geninfo_all_blocks=1 00:04:19.737 --rc geninfo_unexecuted_blocks=1 00:04:19.737 00:04:19.737 ' 00:04:19.737 12:35:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.737 12:35:08 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.737 12:35:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.737 ************************************ 00:04:19.737 START TEST env_memory 00:04:19.737 ************************************ 00:04:19.737 12:35:08 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:19.737 00:04:19.737 00:04:19.737 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.737 http://cunit.sourceforge.net/ 00:04:19.737 00:04:19.737 00:04:19.737 Suite: memory 00:04:19.737 Test: alloc and free memory map ...[2024-11-06 12:35:08.386399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.005 passed 00:04:20.005 Test: mem map translation ...[2024-11-06 12:35:08.455126] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.005 [2024-11-06 12:35:08.455257] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.005 [2024-11-06 12:35:08.455379] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.005 [2024-11-06 12:35:08.455416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.005 passed 00:04:20.005 Test: mem map registration ...[2024-11-06 12:35:08.555938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.005 [2024-11-06 12:35:08.556074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.005 passed 00:04:20.277 Test: mem map adjacent registrations ...passed 00:04:20.277 00:04:20.277 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.277 suites 1 1 n/a 0 0 00:04:20.277 tests 4 4 4 0 0 00:04:20.277 asserts 152 152 152 0 n/a 00:04:20.277 00:04:20.277 Elapsed time = 0.370 seconds 00:04:20.277 00:04:20.277 real 0m0.406s 00:04:20.277 user 0m0.377s 00:04:20.277 sys 0m0.022s 00:04:20.277 12:35:08 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.277 12:35:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.277 ************************************ 00:04:20.277 END TEST env_memory 00:04:20.277 ************************************ 00:04:20.277 12:35:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.277 
12:35:08 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.277 12:35:08 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.277 12:35:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.277 ************************************ 00:04:20.277 START TEST env_vtophys 00:04:20.277 ************************************ 00:04:20.277 12:35:08 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.277 EAL: lib.eal log level changed from notice to debug 00:04:20.277 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 1 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 2 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 3 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 4 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 5 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 6 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 7 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 8 as core 0 on socket 0 00:04:20.277 EAL: Detected lcore 9 as core 0 on socket 0 00:04:20.277 EAL: Maximum logical cores by configuration: 128 00:04:20.277 EAL: Detected CPU lcores: 10 00:04:20.277 EAL: Detected NUMA nodes: 1 00:04:20.277 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.277 EAL: Detected shared linkage of DPDK 00:04:20.277 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.277 EAL: Selected IOVA mode 'PA' 00:04:20.277 EAL: Probing VFIO support... 00:04:20.277 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.277 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:20.277 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.277 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.277 EAL: Setting up physically contiguous memory... 
00:04:20.277 EAL: Setting maximum number of open files to 524288 00:04:20.277 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.277 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.277 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.277 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.278 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.278 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.278 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.278 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.278 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.278 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.278 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.278 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.278 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.278 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.278 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.278 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.278 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.278 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.278 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.278 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.278 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.278 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.278 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.278 EAL: Hugepages will be freed exactly as allocated. 
00:04:20.278 EAL: No shared files mode enabled, IPC is disabled 00:04:20.278 EAL: No shared files mode enabled, IPC is disabled 00:04:20.536 EAL: TSC frequency is ~2200000 KHz 00:04:20.536 EAL: Main lcore 0 is ready (tid=7fc9c48d4a40;cpuset=[0]) 00:04:20.536 EAL: Trying to obtain current memory policy. 00:04:20.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.536 EAL: Restoring previous memory policy: 0 00:04:20.536 EAL: request: mp_malloc_sync 00:04:20.536 EAL: No shared files mode enabled, IPC is disabled 00:04:20.536 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.536 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.536 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.536 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.536 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:20.536 00:04:20.536 00:04:20.536 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.536 http://cunit.sourceforge.net/ 00:04:20.536 00:04:20.536 00:04:20.536 Suite: components_suite 00:04:20.794 Test: vtophys_malloc_test ...passed 00:04:20.794 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:20.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.794 EAL: Restoring previous memory policy: 4 00:04:20.794 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.794 EAL: request: mp_malloc_sync 00:04:20.794 EAL: No shared files mode enabled, IPC is disabled 00:04:20.794 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.794 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.794 EAL: request: mp_malloc_sync 00:04:20.794 EAL: No shared files mode enabled, IPC is disabled 00:04:20.794 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.795 EAL: Trying to obtain current memory policy. 
00:04:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.795 EAL: Restoring previous memory policy: 4 00:04:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.795 EAL: request: mp_malloc_sync 00:04:20.795 EAL: No shared files mode enabled, IPC is disabled 00:04:20.795 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.795 EAL: request: mp_malloc_sync 00:04:20.795 EAL: No shared files mode enabled, IPC is disabled 00:04:20.795 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.795 EAL: Trying to obtain current memory policy. 00:04:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.795 EAL: Restoring previous memory policy: 4 00:04:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.795 EAL: request: mp_malloc_sync 00:04:20.795 EAL: No shared files mode enabled, IPC is disabled 00:04:20.795 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.054 EAL: request: mp_malloc_sync 00:04:21.054 EAL: No shared files mode enabled, IPC is disabled 00:04:21.054 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.054 EAL: Trying to obtain current memory policy. 00:04:21.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.054 EAL: Restoring previous memory policy: 4 00:04:21.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.054 EAL: request: mp_malloc_sync 00:04:21.054 EAL: No shared files mode enabled, IPC is disabled 00:04:21.054 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.054 EAL: request: mp_malloc_sync 00:04:21.054 EAL: No shared files mode enabled, IPC is disabled 00:04:21.054 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.054 EAL: Trying to obtain current memory policy. 
00:04:21.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.054 EAL: Restoring previous memory policy: 4 00:04:21.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.054 EAL: request: mp_malloc_sync 00:04:21.054 EAL: No shared files mode enabled, IPC is disabled 00:04:21.054 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.054 EAL: request: mp_malloc_sync 00:04:21.054 EAL: No shared files mode enabled, IPC is disabled 00:04:21.054 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.054 EAL: Trying to obtain current memory policy. 00:04:21.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.054 EAL: Restoring previous memory policy: 4 00:04:21.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.054 EAL: request: mp_malloc_sync 00:04:21.054 EAL: No shared files mode enabled, IPC is disabled 00:04:21.054 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.314 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.314 EAL: request: mp_malloc_sync 00:04:21.314 EAL: No shared files mode enabled, IPC is disabled 00:04:21.314 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.314 EAL: Trying to obtain current memory policy. 00:04:21.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.314 EAL: Restoring previous memory policy: 4 00:04:21.314 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.314 EAL: request: mp_malloc_sync 00:04:21.314 EAL: No shared files mode enabled, IPC is disabled 00:04:21.314 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.574 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.574 EAL: request: mp_malloc_sync 00:04:21.574 EAL: No shared files mode enabled, IPC is disabled 00:04:21.574 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.834 EAL: Trying to obtain current memory policy. 
00:04:21.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.834 EAL: Restoring previous memory policy: 4 00:04:21.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.834 EAL: request: mp_malloc_sync 00:04:21.834 EAL: No shared files mode enabled, IPC is disabled 00:04:21.834 EAL: Heap on socket 0 was expanded by 258MB 00:04:22.401 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.401 EAL: request: mp_malloc_sync 00:04:22.401 EAL: No shared files mode enabled, IPC is disabled 00:04:22.401 EAL: Heap on socket 0 was shrunk by 258MB 00:04:22.661 EAL: Trying to obtain current memory policy. 00:04:22.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.920 EAL: Restoring previous memory policy: 4 00:04:22.920 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.920 EAL: request: mp_malloc_sync 00:04:22.920 EAL: No shared files mode enabled, IPC is disabled 00:04:22.920 EAL: Heap on socket 0 was expanded by 514MB 00:04:23.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.853 EAL: request: mp_malloc_sync 00:04:23.853 EAL: No shared files mode enabled, IPC is disabled 00:04:23.853 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.419 EAL: Trying to obtain current memory policy. 
00:04:24.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.676 EAL: Restoring previous memory policy: 4 00:04:24.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.676 EAL: request: mp_malloc_sync 00:04:24.676 EAL: No shared files mode enabled, IPC is disabled 00:04:24.676 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.658 EAL: request: mp_malloc_sync 00:04:26.658 EAL: No shared files mode enabled, IPC is disabled 00:04:26.658 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.032 passed 00:04:28.032 00:04:28.032 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.032 suites 1 1 n/a 0 0 00:04:28.032 tests 2 2 2 0 0 00:04:28.032 asserts 5663 5663 5663 0 n/a 00:04:28.032 00:04:28.032 Elapsed time = 7.460 seconds 00:04:28.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.032 EAL: request: mp_malloc_sync 00:04:28.032 EAL: No shared files mode enabled, IPC is disabled 00:04:28.032 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.032 EAL: No shared files mode enabled, IPC is disabled 00:04:28.032 EAL: No shared files mode enabled, IPC is disabled 00:04:28.032 EAL: No shared files mode enabled, IPC is disabled 00:04:28.032 00:04:28.032 real 0m7.788s 00:04:28.032 user 0m6.625s 00:04:28.032 sys 0m0.995s 00:04:28.032 12:35:16 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.032 12:35:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.032 ************************************ 00:04:28.032 END TEST env_vtophys 00:04:28.032 ************************************ 00:04:28.032 12:35:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.032 12:35:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.032 12:35:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.032 12:35:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.032 
************************************ 00:04:28.032 START TEST env_pci 00:04:28.032 ************************************ 00:04:28.032 12:35:16 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.032 00:04:28.032 00:04:28.032 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.032 http://cunit.sourceforge.net/ 00:04:28.032 00:04:28.032 00:04:28.032 Suite: pci 00:04:28.032 Test: pci_hook ...[2024-11-06 12:35:16.635226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56718 has claimed it 00:04:28.033 passed 00:04:28.033 00:04:28.033 EAL: Cannot find device (10000:00:01.0) 00:04:28.033 EAL: Failed to attach device on primary process 00:04:28.033 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.033 suites 1 1 n/a 0 0 00:04:28.033 tests 1 1 1 0 0 00:04:28.033 asserts 25 25 25 0 n/a 00:04:28.033 00:04:28.033 Elapsed time = 0.008 seconds 00:04:28.033 00:04:28.033 real 0m0.085s 00:04:28.033 user 0m0.043s 00:04:28.033 sys 0m0.042s 00:04:28.033 12:35:16 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.033 12:35:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:28.033 ************************************ 00:04:28.033 END TEST env_pci 00:04:28.033 ************************************ 00:04:28.291 12:35:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:28.291 12:35:16 env -- env/env.sh@15 -- # uname 00:04:28.291 12:35:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:28.291 12:35:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:28.291 12:35:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.291 12:35:16 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:28.291 12:35:16 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.291 12:35:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.291 ************************************ 00:04:28.291 START TEST env_dpdk_post_init 00:04:28.291 ************************************ 00:04:28.291 12:35:16 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.291 EAL: Detected CPU lcores: 10 00:04:28.291 EAL: Detected NUMA nodes: 1 00:04:28.291 EAL: Detected shared linkage of DPDK 00:04:28.291 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.291 EAL: Selected IOVA mode 'PA' 00:04:28.291 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.549 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:28.549 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:28.549 Starting DPDK initialization... 00:04:28.549 Starting SPDK post initialization... 00:04:28.549 SPDK NVMe probe 00:04:28.550 Attaching to 0000:00:10.0 00:04:28.550 Attaching to 0000:00:11.0 00:04:28.550 Attached to 0000:00:10.0 00:04:28.550 Attached to 0000:00:11.0 00:04:28.550 Cleaning up... 
00:04:28.550 00:04:28.550 real 0m0.277s 00:04:28.550 user 0m0.088s 00:04:28.550 sys 0m0.089s 00:04:28.550 ************************************ 00:04:28.550 END TEST env_dpdk_post_init 00:04:28.550 ************************************ 00:04:28.550 12:35:17 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.550 12:35:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.550 12:35:17 env -- env/env.sh@26 -- # uname 00:04:28.550 12:35:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.550 12:35:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.550 12:35:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.550 12:35:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.550 12:35:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.550 ************************************ 00:04:28.550 START TEST env_mem_callbacks 00:04:28.550 ************************************ 00:04:28.550 12:35:17 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.550 EAL: Detected CPU lcores: 10 00:04:28.550 EAL: Detected NUMA nodes: 1 00:04:28.550 EAL: Detected shared linkage of DPDK 00:04:28.550 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.550 EAL: Selected IOVA mode 'PA' 00:04:28.808 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.808 00:04:28.808 00:04:28.808 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.808 http://cunit.sourceforge.net/ 00:04:28.808 00:04:28.808 00:04:28.808 Suite: memory 00:04:28.808 Test: test ... 
00:04:28.808 register 0x200000200000 2097152 00:04:28.808 malloc 3145728 00:04:28.808 register 0x200000400000 4194304 00:04:28.808 buf 0x2000004fffc0 len 3145728 PASSED 00:04:28.808 malloc 64 00:04:28.808 buf 0x2000004ffec0 len 64 PASSED 00:04:28.808 malloc 4194304 00:04:28.808 register 0x200000800000 6291456 00:04:28.808 buf 0x2000009fffc0 len 4194304 PASSED 00:04:28.808 free 0x2000004fffc0 3145728 00:04:28.808 free 0x2000004ffec0 64 00:04:28.808 unregister 0x200000400000 4194304 PASSED 00:04:28.808 free 0x2000009fffc0 4194304 00:04:28.809 unregister 0x200000800000 6291456 PASSED 00:04:28.809 malloc 8388608 00:04:28.809 register 0x200000400000 10485760 00:04:28.809 buf 0x2000005fffc0 len 8388608 PASSED 00:04:28.809 free 0x2000005fffc0 8388608 00:04:28.809 unregister 0x200000400000 10485760 PASSED 00:04:28.809 passed 00:04:28.809 00:04:28.809 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.809 suites 1 1 n/a 0 0 00:04:28.809 tests 1 1 1 0 0 00:04:28.809 asserts 15 15 15 0 n/a 00:04:28.809 00:04:28.809 Elapsed time = 0.079 seconds 00:04:28.809 00:04:28.809 real 0m0.293s 00:04:28.809 user 0m0.111s 00:04:28.809 sys 0m0.077s 00:04:28.809 12:35:17 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.809 ************************************ 00:04:28.809 END TEST env_mem_callbacks 00:04:28.809 ************************************ 00:04:28.809 12:35:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.809 ************************************ 00:04:28.809 END TEST env 00:04:28.809 ************************************ 00:04:28.809 00:04:28.809 real 0m9.300s 00:04:28.809 user 0m7.440s 00:04:28.809 sys 0m1.463s 00:04:28.809 12:35:17 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.809 12:35:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.809 12:35:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:28.809 12:35:17 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.809 12:35:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.809 12:35:17 -- common/autotest_common.sh@10 -- # set +x 00:04:28.809 ************************************ 00:04:28.809 START TEST rpc 00:04:28.809 ************************************ 00:04:28.809 12:35:17 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.067 * Looking for test storage... 00:04:29.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.068 12:35:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.068 12:35:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.068 12:35:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.068 12:35:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.068 12:35:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.068 12:35:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.068 12:35:17 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.068 12:35:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.068 12:35:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.068 12:35:17 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.068 12:35:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.068 12:35:17 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.068 12:35:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.068 12:35:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.068 12:35:17 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.068 --rc genhtml_branch_coverage=1 00:04:29.068 --rc genhtml_function_coverage=1 00:04:29.068 --rc genhtml_legend=1 00:04:29.068 --rc geninfo_all_blocks=1 00:04:29.068 --rc geninfo_unexecuted_blocks=1 00:04:29.068 00:04:29.068 ' 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.068 --rc genhtml_branch_coverage=1 00:04:29.068 --rc genhtml_function_coverage=1 00:04:29.068 --rc genhtml_legend=1 00:04:29.068 --rc geninfo_all_blocks=1 00:04:29.068 --rc geninfo_unexecuted_blocks=1 00:04:29.068 00:04:29.068 ' 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:29.068 --rc genhtml_branch_coverage=1 00:04:29.068 --rc genhtml_function_coverage=1 00:04:29.068 --rc genhtml_legend=1 00:04:29.068 --rc geninfo_all_blocks=1 00:04:29.068 --rc geninfo_unexecuted_blocks=1 00:04:29.068 00:04:29.068 ' 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.068 --rc genhtml_branch_coverage=1 00:04:29.068 --rc genhtml_function_coverage=1 00:04:29.068 --rc genhtml_legend=1 00:04:29.068 --rc geninfo_all_blocks=1 00:04:29.068 --rc geninfo_unexecuted_blocks=1 00:04:29.068 00:04:29.068 ' 00:04:29.068 12:35:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56845 00:04:29.068 12:35:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:29.068 12:35:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.068 12:35:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56845 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@833 -- # '[' -z 56845 ']' 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.068 12:35:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.326 [2024-11-06 12:35:17.763750] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:04:29.326 [2024-11-06 12:35:17.764108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56845 ] 00:04:29.326 [2024-11-06 12:35:17.948183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.585 [2024-11-06 12:35:18.103163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:29.585 [2024-11-06 12:35:18.103545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56845' to capture a snapshot of events at runtime. 00:04:29.585 [2024-11-06 12:35:18.103581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:29.585 [2024-11-06 12:35:18.103601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:29.585 [2024-11-06 12:35:18.103616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56845 for offline analysis/debug. 
00:04:29.585 [2024-11-06 12:35:18.105228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.521 12:35:18 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.521 12:35:18 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:30.521 12:35:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.521 12:35:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.521 12:35:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.521 12:35:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.521 12:35:18 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.521 12:35:18 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.521 12:35:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.521 ************************************ 00:04:30.521 START TEST rpc_integrity 00:04:30.521 ************************************ 00:04:30.521 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:30.521 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.521 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.521 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.521 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.521 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.521 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.521 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.521 12:35:19 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.521 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.521 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.522 { 00:04:30.522 "name": "Malloc0", 00:04:30.522 "aliases": [ 00:04:30.522 "da7843b2-e2fa-4c0f-af6a-0df2c37d0aad" 00:04:30.522 ], 00:04:30.522 "product_name": "Malloc disk", 00:04:30.522 "block_size": 512, 00:04:30.522 "num_blocks": 16384, 00:04:30.522 "uuid": "da7843b2-e2fa-4c0f-af6a-0df2c37d0aad", 00:04:30.522 "assigned_rate_limits": { 00:04:30.522 "rw_ios_per_sec": 0, 00:04:30.522 "rw_mbytes_per_sec": 0, 00:04:30.522 "r_mbytes_per_sec": 0, 00:04:30.522 "w_mbytes_per_sec": 0 00:04:30.522 }, 00:04:30.522 "claimed": false, 00:04:30.522 "zoned": false, 00:04:30.522 "supported_io_types": { 00:04:30.522 "read": true, 00:04:30.522 "write": true, 00:04:30.522 "unmap": true, 00:04:30.522 "flush": true, 00:04:30.522 "reset": true, 00:04:30.522 "nvme_admin": false, 00:04:30.522 "nvme_io": false, 00:04:30.522 "nvme_io_md": false, 00:04:30.522 "write_zeroes": true, 00:04:30.522 "zcopy": true, 00:04:30.522 "get_zone_info": false, 00:04:30.522 "zone_management": false, 00:04:30.522 "zone_append": false, 00:04:30.522 "compare": false, 00:04:30.522 "compare_and_write": false, 00:04:30.522 "abort": true, 00:04:30.522 "seek_hole": false, 
00:04:30.522 "seek_data": false, 00:04:30.522 "copy": true, 00:04:30.522 "nvme_iov_md": false 00:04:30.522 }, 00:04:30.522 "memory_domains": [ 00:04:30.522 { 00:04:30.522 "dma_device_id": "system", 00:04:30.522 "dma_device_type": 1 00:04:30.522 }, 00:04:30.522 { 00:04:30.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.522 "dma_device_type": 2 00:04:30.522 } 00:04:30.522 ], 00:04:30.522 "driver_specific": {} 00:04:30.522 } 00:04:30.522 ]' 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.522 [2024-11-06 12:35:19.160940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:30.522 [2024-11-06 12:35:19.161220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.522 [2024-11-06 12:35:19.161287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:30.522 [2024-11-06 12:35:19.161322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.522 [2024-11-06 12:35:19.164365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.522 [2024-11-06 12:35:19.164422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.522 Passthru0 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.522 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.522 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:30.780 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.781 { 00:04:30.781 "name": "Malloc0", 00:04:30.781 "aliases": [ 00:04:30.781 "da7843b2-e2fa-4c0f-af6a-0df2c37d0aad" 00:04:30.781 ], 00:04:30.781 "product_name": "Malloc disk", 00:04:30.781 "block_size": 512, 00:04:30.781 "num_blocks": 16384, 00:04:30.781 "uuid": "da7843b2-e2fa-4c0f-af6a-0df2c37d0aad", 00:04:30.781 "assigned_rate_limits": { 00:04:30.781 "rw_ios_per_sec": 0, 00:04:30.781 "rw_mbytes_per_sec": 0, 00:04:30.781 "r_mbytes_per_sec": 0, 00:04:30.781 "w_mbytes_per_sec": 0 00:04:30.781 }, 00:04:30.781 "claimed": true, 00:04:30.781 "claim_type": "exclusive_write", 00:04:30.781 "zoned": false, 00:04:30.781 "supported_io_types": { 00:04:30.781 "read": true, 00:04:30.781 "write": true, 00:04:30.781 "unmap": true, 00:04:30.781 "flush": true, 00:04:30.781 "reset": true, 00:04:30.781 "nvme_admin": false, 00:04:30.781 "nvme_io": false, 00:04:30.781 "nvme_io_md": false, 00:04:30.781 "write_zeroes": true, 00:04:30.781 "zcopy": true, 00:04:30.781 "get_zone_info": false, 00:04:30.781 "zone_management": false, 00:04:30.781 "zone_append": false, 00:04:30.781 "compare": false, 00:04:30.781 "compare_and_write": false, 00:04:30.781 "abort": true, 00:04:30.781 "seek_hole": false, 00:04:30.781 "seek_data": false, 00:04:30.781 "copy": true, 00:04:30.781 "nvme_iov_md": false 00:04:30.781 }, 00:04:30.781 "memory_domains": [ 00:04:30.781 { 00:04:30.781 "dma_device_id": "system", 00:04:30.781 "dma_device_type": 1 00:04:30.781 }, 00:04:30.781 { 00:04:30.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.781 "dma_device_type": 2 00:04:30.781 } 00:04:30.781 ], 00:04:30.781 "driver_specific": {} 00:04:30.781 }, 00:04:30.781 { 00:04:30.781 "name": "Passthru0", 00:04:30.781 "aliases": [ 00:04:30.781 "48f8183f-608c-5ed3-8e72-9f2869dc8c63" 00:04:30.781 ], 00:04:30.781 "product_name": "passthru", 00:04:30.781 
"block_size": 512, 00:04:30.781 "num_blocks": 16384, 00:04:30.781 "uuid": "48f8183f-608c-5ed3-8e72-9f2869dc8c63", 00:04:30.781 "assigned_rate_limits": { 00:04:30.781 "rw_ios_per_sec": 0, 00:04:30.781 "rw_mbytes_per_sec": 0, 00:04:30.781 "r_mbytes_per_sec": 0, 00:04:30.781 "w_mbytes_per_sec": 0 00:04:30.781 }, 00:04:30.781 "claimed": false, 00:04:30.781 "zoned": false, 00:04:30.781 "supported_io_types": { 00:04:30.781 "read": true, 00:04:30.781 "write": true, 00:04:30.781 "unmap": true, 00:04:30.781 "flush": true, 00:04:30.781 "reset": true, 00:04:30.781 "nvme_admin": false, 00:04:30.781 "nvme_io": false, 00:04:30.781 "nvme_io_md": false, 00:04:30.781 "write_zeroes": true, 00:04:30.781 "zcopy": true, 00:04:30.781 "get_zone_info": false, 00:04:30.781 "zone_management": false, 00:04:30.781 "zone_append": false, 00:04:30.781 "compare": false, 00:04:30.781 "compare_and_write": false, 00:04:30.781 "abort": true, 00:04:30.781 "seek_hole": false, 00:04:30.781 "seek_data": false, 00:04:30.781 "copy": true, 00:04:30.781 "nvme_iov_md": false 00:04:30.781 }, 00:04:30.781 "memory_domains": [ 00:04:30.781 { 00:04:30.781 "dma_device_id": "system", 00:04:30.781 "dma_device_type": 1 00:04:30.781 }, 00:04:30.781 { 00:04:30.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.781 "dma_device_type": 2 00:04:30.781 } 00:04:30.781 ], 00:04:30.781 "driver_specific": { 00:04:30.781 "passthru": { 00:04:30.781 "name": "Passthru0", 00:04:30.781 "base_bdev_name": "Malloc0" 00:04:30.781 } 00:04:30.781 } 00:04:30.781 } 00:04:30.781 ]' 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 12:35:19 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.781 ************************************ 00:04:30.781 END TEST rpc_integrity 00:04:30.781 ************************************ 00:04:30.781 12:35:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.781 00:04:30.781 real 0m0.343s 00:04:30.781 user 0m0.203s 00:04:30.781 sys 0m0.044s 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.781 12:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 12:35:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.781 12:35:19 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.781 12:35:19 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.781 12:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 ************************************ 00:04:30.781 START TEST rpc_plugins 00:04:30.781 ************************************ 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:30.781 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.781 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.781 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.781 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.781 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.781 { 00:04:30.781 "name": "Malloc1", 00:04:30.781 "aliases": [ 00:04:30.781 "92c11629-f48c-484e-8e49-73bd29c20f06" 00:04:30.781 ], 00:04:30.781 "product_name": "Malloc disk", 00:04:30.781 "block_size": 4096, 00:04:30.781 "num_blocks": 256, 00:04:30.781 "uuid": "92c11629-f48c-484e-8e49-73bd29c20f06", 00:04:30.781 "assigned_rate_limits": { 00:04:30.781 "rw_ios_per_sec": 0, 00:04:30.781 "rw_mbytes_per_sec": 0, 00:04:30.781 "r_mbytes_per_sec": 0, 00:04:30.781 "w_mbytes_per_sec": 0 00:04:30.781 }, 00:04:30.781 "claimed": false, 00:04:30.781 "zoned": false, 00:04:30.781 "supported_io_types": { 00:04:30.781 "read": true, 00:04:30.781 "write": true, 00:04:30.781 "unmap": true, 00:04:30.781 "flush": true, 00:04:30.781 "reset": true, 00:04:30.781 "nvme_admin": false, 00:04:30.781 "nvme_io": false, 00:04:30.781 "nvme_io_md": false, 00:04:30.781 "write_zeroes": true, 00:04:30.781 "zcopy": true, 00:04:30.781 "get_zone_info": false, 00:04:30.781 "zone_management": false, 00:04:30.781 "zone_append": false, 00:04:30.781 "compare": false, 00:04:30.781 "compare_and_write": false, 00:04:30.781 "abort": true, 00:04:30.781 "seek_hole": false, 00:04:30.781 "seek_data": false, 00:04:30.781 "copy": 
true, 00:04:30.781 "nvme_iov_md": false 00:04:30.781 }, 00:04:30.781 "memory_domains": [ 00:04:30.781 { 00:04:30.781 "dma_device_id": "system", 00:04:30.781 "dma_device_type": 1 00:04:30.781 }, 00:04:30.781 { 00:04:30.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.781 "dma_device_type": 2 00:04:30.781 } 00:04:30.781 ], 00:04:30.781 "driver_specific": {} 00:04:30.781 } 00:04:30.781 ]' 00:04:30.781 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:31.040 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:31.040 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.040 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.040 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:31.040 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:31.040 ************************************ 00:04:31.040 END TEST rpc_plugins 00:04:31.040 ************************************ 00:04:31.040 12:35:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:31.040 00:04:31.040 real 0m0.160s 00:04:31.040 user 0m0.094s 00:04:31.040 sys 0m0.025s 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.040 12:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:31.040 12:35:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:31.040 12:35:19 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.040 12:35:19 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.040 12:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.040 ************************************ 00:04:31.040 START TEST rpc_trace_cmd_test 00:04:31.040 ************************************ 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.040 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:31.040 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56845", 00:04:31.041 "tpoint_group_mask": "0x8", 00:04:31.041 "iscsi_conn": { 00:04:31.041 "mask": "0x2", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "scsi": { 00:04:31.041 "mask": "0x4", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "bdev": { 00:04:31.041 "mask": "0x8", 00:04:31.041 "tpoint_mask": "0xffffffffffffffff" 00:04:31.041 }, 00:04:31.041 "nvmf_rdma": { 00:04:31.041 "mask": "0x10", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "nvmf_tcp": { 00:04:31.041 "mask": "0x20", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "ftl": { 00:04:31.041 "mask": "0x40", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "blobfs": { 00:04:31.041 "mask": "0x80", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "dsa": { 00:04:31.041 "mask": "0x200", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "thread": { 00:04:31.041 "mask": "0x400", 00:04:31.041 
"tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "nvme_pcie": { 00:04:31.041 "mask": "0x800", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "iaa": { 00:04:31.041 "mask": "0x1000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "nvme_tcp": { 00:04:31.041 "mask": "0x2000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "bdev_nvme": { 00:04:31.041 "mask": "0x4000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "sock": { 00:04:31.041 "mask": "0x8000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "blob": { 00:04:31.041 "mask": "0x10000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "bdev_raid": { 00:04:31.041 "mask": "0x20000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 }, 00:04:31.041 "scheduler": { 00:04:31.041 "mask": "0x40000", 00:04:31.041 "tpoint_mask": "0x0" 00:04:31.041 } 00:04:31.041 }' 00:04:31.041 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:31.041 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:31.041 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:31.300 ************************************ 00:04:31.300 END TEST rpc_trace_cmd_test 00:04:31.300 ************************************ 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:31.300 00:04:31.300 real 0m0.277s 00:04:31.300 user 
0m0.228s 00:04:31.300 sys 0m0.035s 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.300 12:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 12:35:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:31.300 12:35:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:31.300 12:35:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:31.300 12:35:19 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.300 12:35:19 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.300 12:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 ************************************ 00:04:31.300 START TEST rpc_daemon_integrity 00:04:31.300 ************************************ 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.300 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.559 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.559 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.559 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.560 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:31.560 12:35:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.560 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.560 12:35:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.560 { 00:04:31.560 "name": "Malloc2", 00:04:31.560 "aliases": [ 00:04:31.560 "bf0f15ad-94a9-413b-af99-4870a67881da" 00:04:31.560 ], 00:04:31.560 "product_name": "Malloc disk", 00:04:31.560 "block_size": 512, 00:04:31.560 "num_blocks": 16384, 00:04:31.560 "uuid": "bf0f15ad-94a9-413b-af99-4870a67881da", 00:04:31.560 "assigned_rate_limits": { 00:04:31.560 "rw_ios_per_sec": 0, 00:04:31.560 "rw_mbytes_per_sec": 0, 00:04:31.560 "r_mbytes_per_sec": 0, 00:04:31.560 "w_mbytes_per_sec": 0 00:04:31.560 }, 00:04:31.560 "claimed": false, 00:04:31.560 "zoned": false, 00:04:31.560 "supported_io_types": { 00:04:31.560 "read": true, 00:04:31.560 "write": true, 00:04:31.560 "unmap": true, 00:04:31.560 "flush": true, 00:04:31.560 "reset": true, 00:04:31.560 "nvme_admin": false, 00:04:31.560 "nvme_io": false, 00:04:31.560 "nvme_io_md": false, 00:04:31.560 "write_zeroes": true, 00:04:31.560 "zcopy": true, 00:04:31.560 "get_zone_info": false, 00:04:31.560 "zone_management": false, 00:04:31.560 "zone_append": false, 00:04:31.560 "compare": false, 00:04:31.560 "compare_and_write": false, 00:04:31.560 "abort": true, 00:04:31.560 "seek_hole": false, 00:04:31.560 "seek_data": false, 00:04:31.560 "copy": true, 00:04:31.560 "nvme_iov_md": false 00:04:31.560 }, 00:04:31.560 "memory_domains": [ 00:04:31.560 { 00:04:31.560 "dma_device_id": "system", 00:04:31.560 "dma_device_type": 1 00:04:31.560 }, 00:04:31.560 { 00:04:31.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.560 "dma_device_type": 2 00:04:31.560 } 
00:04:31.560 ], 00:04:31.560 "driver_specific": {} 00:04:31.560 } 00:04:31.560 ]' 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 [2024-11-06 12:35:20.067532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.560 [2024-11-06 12:35:20.067769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.560 [2024-11-06 12:35:20.067815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:31.560 [2024-11-06 12:35:20.067852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.560 [2024-11-06 12:35:20.070895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.560 [2024-11-06 12:35:20.071069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.560 Passthru0 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.560 { 00:04:31.560 "name": "Malloc2", 00:04:31.560 "aliases": [ 00:04:31.560 "bf0f15ad-94a9-413b-af99-4870a67881da" 
00:04:31.560 ], 00:04:31.560 "product_name": "Malloc disk", 00:04:31.560 "block_size": 512, 00:04:31.560 "num_blocks": 16384, 00:04:31.560 "uuid": "bf0f15ad-94a9-413b-af99-4870a67881da", 00:04:31.560 "assigned_rate_limits": { 00:04:31.560 "rw_ios_per_sec": 0, 00:04:31.560 "rw_mbytes_per_sec": 0, 00:04:31.560 "r_mbytes_per_sec": 0, 00:04:31.560 "w_mbytes_per_sec": 0 00:04:31.560 }, 00:04:31.560 "claimed": true, 00:04:31.560 "claim_type": "exclusive_write", 00:04:31.560 "zoned": false, 00:04:31.560 "supported_io_types": { 00:04:31.560 "read": true, 00:04:31.560 "write": true, 00:04:31.560 "unmap": true, 00:04:31.560 "flush": true, 00:04:31.560 "reset": true, 00:04:31.560 "nvme_admin": false, 00:04:31.560 "nvme_io": false, 00:04:31.560 "nvme_io_md": false, 00:04:31.560 "write_zeroes": true, 00:04:31.560 "zcopy": true, 00:04:31.560 "get_zone_info": false, 00:04:31.560 "zone_management": false, 00:04:31.560 "zone_append": false, 00:04:31.560 "compare": false, 00:04:31.560 "compare_and_write": false, 00:04:31.560 "abort": true, 00:04:31.560 "seek_hole": false, 00:04:31.560 "seek_data": false, 00:04:31.560 "copy": true, 00:04:31.560 "nvme_iov_md": false 00:04:31.560 }, 00:04:31.560 "memory_domains": [ 00:04:31.560 { 00:04:31.560 "dma_device_id": "system", 00:04:31.560 "dma_device_type": 1 00:04:31.560 }, 00:04:31.560 { 00:04:31.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.560 "dma_device_type": 2 00:04:31.560 } 00:04:31.560 ], 00:04:31.560 "driver_specific": {} 00:04:31.560 }, 00:04:31.560 { 00:04:31.560 "name": "Passthru0", 00:04:31.560 "aliases": [ 00:04:31.560 "f9da254a-e19c-5227-9ba1-57c0cccad5c2" 00:04:31.560 ], 00:04:31.560 "product_name": "passthru", 00:04:31.560 "block_size": 512, 00:04:31.560 "num_blocks": 16384, 00:04:31.560 "uuid": "f9da254a-e19c-5227-9ba1-57c0cccad5c2", 00:04:31.560 "assigned_rate_limits": { 00:04:31.560 "rw_ios_per_sec": 0, 00:04:31.560 "rw_mbytes_per_sec": 0, 00:04:31.560 "r_mbytes_per_sec": 0, 00:04:31.560 "w_mbytes_per_sec": 0 
00:04:31.560 }, 00:04:31.560 "claimed": false, 00:04:31.560 "zoned": false, 00:04:31.560 "supported_io_types": { 00:04:31.560 "read": true, 00:04:31.560 "write": true, 00:04:31.560 "unmap": true, 00:04:31.560 "flush": true, 00:04:31.560 "reset": true, 00:04:31.560 "nvme_admin": false, 00:04:31.560 "nvme_io": false, 00:04:31.560 "nvme_io_md": false, 00:04:31.560 "write_zeroes": true, 00:04:31.560 "zcopy": true, 00:04:31.560 "get_zone_info": false, 00:04:31.560 "zone_management": false, 00:04:31.560 "zone_append": false, 00:04:31.560 "compare": false, 00:04:31.560 "compare_and_write": false, 00:04:31.560 "abort": true, 00:04:31.560 "seek_hole": false, 00:04:31.560 "seek_data": false, 00:04:31.560 "copy": true, 00:04:31.560 "nvme_iov_md": false 00:04:31.560 }, 00:04:31.560 "memory_domains": [ 00:04:31.560 { 00:04:31.560 "dma_device_id": "system", 00:04:31.560 "dma_device_type": 1 00:04:31.560 }, 00:04:31.560 { 00:04:31.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.560 "dma_device_type": 2 00:04:31.560 } 00:04:31.560 ], 00:04:31.560 "driver_specific": { 00:04:31.560 "passthru": { 00:04:31.560 "name": "Passthru0", 00:04:31.560 "base_bdev_name": "Malloc2" 00:04:31.560 } 00:04:31.560 } 00:04:31.560 } 00:04:31.560 ]' 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.560 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.819 ************************************ 00:04:31.819 END TEST rpc_daemon_integrity 00:04:31.819 ************************************ 00:04:31.819 12:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.819 00:04:31.819 real 0m0.327s 00:04:31.819 user 0m0.198s 00:04:31.819 sys 0m0.037s 00:04:31.819 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.819 12:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.819 12:35:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.819 12:35:20 rpc -- rpc/rpc.sh@84 -- # killprocess 56845 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@952 -- # '[' -z 56845 ']' 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@956 -- # kill -0 56845 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@957 -- # uname 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56845 00:04:31.819 killing process with pid 56845 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56845' 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@971 -- # kill 56845 00:04:31.819 12:35:20 rpc -- common/autotest_common.sh@976 -- # wait 56845 00:04:34.350 ************************************ 00:04:34.350 END TEST rpc 00:04:34.350 ************************************ 00:04:34.350 00:04:34.350 real 0m5.131s 00:04:34.350 user 0m5.761s 00:04:34.350 sys 0m0.918s 00:04:34.350 12:35:22 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.350 12:35:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.350 12:35:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:34.350 12:35:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.350 12:35:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.350 12:35:22 -- common/autotest_common.sh@10 -- # set +x 00:04:34.350 ************************************ 00:04:34.350 START TEST skip_rpc 00:04:34.350 ************************************ 00:04:34.350 12:35:22 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:34.350 * Looking for test storage... 
00:04:34.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.350 12:35:22 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.350 12:35:22 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.350 12:35:22 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.350 12:35:22 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.350 12:35:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.351 12:35:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.351 --rc genhtml_branch_coverage=1 00:04:34.351 --rc genhtml_function_coverage=1 00:04:34.351 --rc genhtml_legend=1 00:04:34.351 --rc geninfo_all_blocks=1 00:04:34.351 --rc geninfo_unexecuted_blocks=1 00:04:34.351 00:04:34.351 ' 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.351 --rc genhtml_branch_coverage=1 00:04:34.351 --rc genhtml_function_coverage=1 00:04:34.351 --rc genhtml_legend=1 00:04:34.351 --rc geninfo_all_blocks=1 00:04:34.351 --rc geninfo_unexecuted_blocks=1 00:04:34.351 00:04:34.351 ' 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:34.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.351 --rc genhtml_branch_coverage=1 00:04:34.351 --rc genhtml_function_coverage=1 00:04:34.351 --rc genhtml_legend=1 00:04:34.351 --rc geninfo_all_blocks=1 00:04:34.351 --rc geninfo_unexecuted_blocks=1 00:04:34.351 00:04:34.351 ' 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.351 --rc genhtml_branch_coverage=1 00:04:34.351 --rc genhtml_function_coverage=1 00:04:34.351 --rc genhtml_legend=1 00:04:34.351 --rc geninfo_all_blocks=1 00:04:34.351 --rc geninfo_unexecuted_blocks=1 00:04:34.351 00:04:34.351 ' 00:04:34.351 12:35:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.351 12:35:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.351 12:35:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.351 12:35:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.351 ************************************ 00:04:34.351 START TEST skip_rpc 00:04:34.351 ************************************ 00:04:34.351 12:35:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:34.351 12:35:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57080 00:04:34.351 12:35:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.351 12:35:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.351 12:35:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:34.351 [2024-11-06 12:35:22.973261] Starting SPDK v25.01-pre 
git sha1 88726e83b / DPDK 24.03.0 initialization... 00:04:34.351 [2024-11-06 12:35:22.973710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57080 ] 00:04:34.610 [2024-11-06 12:35:23.167259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.868 [2024-11-06 12:35:23.323403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57080 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57080 ']' 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57080 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57080 00:04:40.144 killing process with pid 57080 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57080' 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57080 00:04:40.144 12:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57080 00:04:42.055 ************************************ 00:04:42.055 END TEST skip_rpc 00:04:42.055 ************************************ 00:04:42.055 00:04:42.055 real 0m7.471s 00:04:42.055 user 0m6.919s 00:04:42.055 sys 0m0.443s 00:04:42.055 12:35:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.055 12:35:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.055 12:35:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.055 12:35:30 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.055 12:35:30 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.055 12:35:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.055 
************************************ 00:04:42.055 START TEST skip_rpc_with_json 00:04:42.055 ************************************ 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57184 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57184 00:04:42.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.055 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57184 ']' 00:04:42.056 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.056 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.056 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.056 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.056 12:35:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.056 [2024-11-06 12:35:30.476070] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:04:42.056 [2024-11-06 12:35:30.476549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57184 ] 00:04:42.056 [2024-11-06 12:35:30.655655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.315 [2024-11-06 12:35:30.805303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.252 [2024-11-06 12:35:31.762429] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.252 request: 00:04:43.252 { 00:04:43.252 "trtype": "tcp", 00:04:43.252 "method": "nvmf_get_transports", 00:04:43.252 "req_id": 1 00:04:43.252 } 00:04:43.252 Got JSON-RPC error response 00:04:43.252 response: 00:04:43.252 { 00:04:43.252 "code": -19, 00:04:43.252 "message": "No such device" 00:04:43.252 } 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.252 [2024-11-06 12:35:31.770589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.252 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.511 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.511 12:35:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.511 { 00:04:43.511 "subsystems": [ 00:04:43.511 { 00:04:43.511 "subsystem": "fsdev", 00:04:43.511 "config": [ 00:04:43.511 { 00:04:43.511 "method": "fsdev_set_opts", 00:04:43.511 "params": { 00:04:43.511 "fsdev_io_pool_size": 65535, 00:04:43.511 "fsdev_io_cache_size": 256 00:04:43.511 } 00:04:43.511 } 00:04:43.511 ] 00:04:43.511 }, 00:04:43.511 { 00:04:43.511 "subsystem": "keyring", 00:04:43.511 "config": [] 00:04:43.511 }, 00:04:43.511 { 00:04:43.511 "subsystem": "iobuf", 00:04:43.511 "config": [ 00:04:43.511 { 00:04:43.511 "method": "iobuf_set_options", 00:04:43.511 "params": { 00:04:43.511 "small_pool_count": 8192, 00:04:43.511 "large_pool_count": 1024, 00:04:43.511 "small_bufsize": 8192, 00:04:43.511 "large_bufsize": 135168, 00:04:43.511 "enable_numa": false 00:04:43.511 } 00:04:43.511 } 00:04:43.511 ] 00:04:43.511 }, 00:04:43.511 { 00:04:43.511 "subsystem": "sock", 00:04:43.511 "config": [ 00:04:43.511 { 00:04:43.511 "method": "sock_set_default_impl", 00:04:43.511 "params": { 00:04:43.511 "impl_name": "posix" 00:04:43.511 } 00:04:43.511 }, 00:04:43.511 { 00:04:43.511 "method": "sock_impl_set_options", 00:04:43.511 "params": { 00:04:43.511 "impl_name": "ssl", 00:04:43.511 "recv_buf_size": 4096, 00:04:43.511 "send_buf_size": 4096, 00:04:43.512 "enable_recv_pipe": true, 00:04:43.512 "enable_quickack": false, 00:04:43.512 
"enable_placement_id": 0, 00:04:43.512 "enable_zerocopy_send_server": true, 00:04:43.512 "enable_zerocopy_send_client": false, 00:04:43.512 "zerocopy_threshold": 0, 00:04:43.512 "tls_version": 0, 00:04:43.512 "enable_ktls": false 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "sock_impl_set_options", 00:04:43.512 "params": { 00:04:43.512 "impl_name": "posix", 00:04:43.512 "recv_buf_size": 2097152, 00:04:43.512 "send_buf_size": 2097152, 00:04:43.512 "enable_recv_pipe": true, 00:04:43.512 "enable_quickack": false, 00:04:43.512 "enable_placement_id": 0, 00:04:43.512 "enable_zerocopy_send_server": true, 00:04:43.512 "enable_zerocopy_send_client": false, 00:04:43.512 "zerocopy_threshold": 0, 00:04:43.512 "tls_version": 0, 00:04:43.512 "enable_ktls": false 00:04:43.512 } 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "vmd", 00:04:43.512 "config": [] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "accel", 00:04:43.512 "config": [ 00:04:43.512 { 00:04:43.512 "method": "accel_set_options", 00:04:43.512 "params": { 00:04:43.512 "small_cache_size": 128, 00:04:43.512 "large_cache_size": 16, 00:04:43.512 "task_count": 2048, 00:04:43.512 "sequence_count": 2048, 00:04:43.512 "buf_count": 2048 00:04:43.512 } 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "bdev", 00:04:43.512 "config": [ 00:04:43.512 { 00:04:43.512 "method": "bdev_set_options", 00:04:43.512 "params": { 00:04:43.512 "bdev_io_pool_size": 65535, 00:04:43.512 "bdev_io_cache_size": 256, 00:04:43.512 "bdev_auto_examine": true, 00:04:43.512 "iobuf_small_cache_size": 128, 00:04:43.512 "iobuf_large_cache_size": 16 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "bdev_raid_set_options", 00:04:43.512 "params": { 00:04:43.512 "process_window_size_kb": 1024, 00:04:43.512 "process_max_bandwidth_mb_sec": 0 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "bdev_iscsi_set_options", 
00:04:43.512 "params": { 00:04:43.512 "timeout_sec": 30 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "bdev_nvme_set_options", 00:04:43.512 "params": { 00:04:43.512 "action_on_timeout": "none", 00:04:43.512 "timeout_us": 0, 00:04:43.512 "timeout_admin_us": 0, 00:04:43.512 "keep_alive_timeout_ms": 10000, 00:04:43.512 "arbitration_burst": 0, 00:04:43.512 "low_priority_weight": 0, 00:04:43.512 "medium_priority_weight": 0, 00:04:43.512 "high_priority_weight": 0, 00:04:43.512 "nvme_adminq_poll_period_us": 10000, 00:04:43.512 "nvme_ioq_poll_period_us": 0, 00:04:43.512 "io_queue_requests": 0, 00:04:43.512 "delay_cmd_submit": true, 00:04:43.512 "transport_retry_count": 4, 00:04:43.512 "bdev_retry_count": 3, 00:04:43.512 "transport_ack_timeout": 0, 00:04:43.512 "ctrlr_loss_timeout_sec": 0, 00:04:43.512 "reconnect_delay_sec": 0, 00:04:43.512 "fast_io_fail_timeout_sec": 0, 00:04:43.512 "disable_auto_failback": false, 00:04:43.512 "generate_uuids": false, 00:04:43.512 "transport_tos": 0, 00:04:43.512 "nvme_error_stat": false, 00:04:43.512 "rdma_srq_size": 0, 00:04:43.512 "io_path_stat": false, 00:04:43.512 "allow_accel_sequence": false, 00:04:43.512 "rdma_max_cq_size": 0, 00:04:43.512 "rdma_cm_event_timeout_ms": 0, 00:04:43.512 "dhchap_digests": [ 00:04:43.512 "sha256", 00:04:43.512 "sha384", 00:04:43.512 "sha512" 00:04:43.512 ], 00:04:43.512 "dhchap_dhgroups": [ 00:04:43.512 "null", 00:04:43.512 "ffdhe2048", 00:04:43.512 "ffdhe3072", 00:04:43.512 "ffdhe4096", 00:04:43.512 "ffdhe6144", 00:04:43.512 "ffdhe8192" 00:04:43.512 ] 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "bdev_nvme_set_hotplug", 00:04:43.512 "params": { 00:04:43.512 "period_us": 100000, 00:04:43.512 "enable": false 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "bdev_wait_for_examine" 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "scsi", 00:04:43.512 "config": null 00:04:43.512 }, 00:04:43.512 { 
00:04:43.512 "subsystem": "scheduler", 00:04:43.512 "config": [ 00:04:43.512 { 00:04:43.512 "method": "framework_set_scheduler", 00:04:43.512 "params": { 00:04:43.512 "name": "static" 00:04:43.512 } 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "vhost_scsi", 00:04:43.512 "config": [] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "vhost_blk", 00:04:43.512 "config": [] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "ublk", 00:04:43.512 "config": [] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "nbd", 00:04:43.512 "config": [] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "nvmf", 00:04:43.512 "config": [ 00:04:43.512 { 00:04:43.512 "method": "nvmf_set_config", 00:04:43.512 "params": { 00:04:43.512 "discovery_filter": "match_any", 00:04:43.512 "admin_cmd_passthru": { 00:04:43.512 "identify_ctrlr": false 00:04:43.512 }, 00:04:43.512 "dhchap_digests": [ 00:04:43.512 "sha256", 00:04:43.512 "sha384", 00:04:43.512 "sha512" 00:04:43.512 ], 00:04:43.512 "dhchap_dhgroups": [ 00:04:43.512 "null", 00:04:43.512 "ffdhe2048", 00:04:43.512 "ffdhe3072", 00:04:43.512 "ffdhe4096", 00:04:43.512 "ffdhe6144", 00:04:43.512 "ffdhe8192" 00:04:43.512 ] 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "nvmf_set_max_subsystems", 00:04:43.512 "params": { 00:04:43.512 "max_subsystems": 1024 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "nvmf_set_crdt", 00:04:43.512 "params": { 00:04:43.512 "crdt1": 0, 00:04:43.512 "crdt2": 0, 00:04:43.512 "crdt3": 0 00:04:43.512 } 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "method": "nvmf_create_transport", 00:04:43.512 "params": { 00:04:43.512 "trtype": "TCP", 00:04:43.512 "max_queue_depth": 128, 00:04:43.512 "max_io_qpairs_per_ctrlr": 127, 00:04:43.512 "in_capsule_data_size": 4096, 00:04:43.512 "max_io_size": 131072, 00:04:43.512 "io_unit_size": 131072, 00:04:43.512 "max_aq_depth": 128, 00:04:43.512 "num_shared_buffers": 511, 
00:04:43.512 "buf_cache_size": 4294967295, 00:04:43.512 "dif_insert_or_strip": false, 00:04:43.512 "zcopy": false, 00:04:43.512 "c2h_success": true, 00:04:43.512 "sock_priority": 0, 00:04:43.512 "abort_timeout_sec": 1, 00:04:43.512 "ack_timeout": 0, 00:04:43.512 "data_wr_pool_size": 0 00:04:43.512 } 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 }, 00:04:43.512 { 00:04:43.512 "subsystem": "iscsi", 00:04:43.512 "config": [ 00:04:43.512 { 00:04:43.512 "method": "iscsi_set_options", 00:04:43.512 "params": { 00:04:43.512 "node_base": "iqn.2016-06.io.spdk", 00:04:43.512 "max_sessions": 128, 00:04:43.512 "max_connections_per_session": 2, 00:04:43.512 "max_queue_depth": 64, 00:04:43.512 "default_time2wait": 2, 00:04:43.512 "default_time2retain": 20, 00:04:43.512 "first_burst_length": 8192, 00:04:43.512 "immediate_data": true, 00:04:43.512 "allow_duplicated_isid": false, 00:04:43.512 "error_recovery_level": 0, 00:04:43.512 "nop_timeout": 60, 00:04:43.512 "nop_in_interval": 30, 00:04:43.512 "disable_chap": false, 00:04:43.512 "require_chap": false, 00:04:43.512 "mutual_chap": false, 00:04:43.512 "chap_group": 0, 00:04:43.512 "max_large_datain_per_connection": 64, 00:04:43.512 "max_r2t_per_connection": 4, 00:04:43.512 "pdu_pool_size": 36864, 00:04:43.512 "immediate_data_pool_size": 16384, 00:04:43.512 "data_out_pool_size": 2048 00:04:43.512 } 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 } 00:04:43.512 ] 00:04:43.512 } 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57184 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57184 ']' 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57184 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57184 00:04:43.512 killing process with pid 57184 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57184' 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57184 00:04:43.512 12:35:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57184 00:04:46.045 12:35:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57240 00:04:46.045 12:35:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.045 12:35:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57240 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57240 ']' 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57240 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57240 00:04:51.348 killing process with pid 57240 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57240' 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57240 00:04:51.348 12:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57240 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.252 00:04:53.252 real 0m11.211s 00:04:53.252 user 0m10.558s 00:04:53.252 sys 0m1.101s 00:04:53.252 ************************************ 00:04:53.252 END TEST skip_rpc_with_json 00:04:53.252 ************************************ 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.252 12:35:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.252 12:35:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.252 12:35:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.252 12:35:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.252 ************************************ 00:04:53.252 START TEST skip_rpc_with_delay 00:04:53.252 ************************************ 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:53.252 
12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.252 [2024-11-06 12:35:41.749005] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.252 00:04:53.252 real 0m0.193s 00:04:53.252 user 0m0.099s 00:04:53.252 sys 0m0.091s 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.252 ************************************ 00:04:53.252 END TEST skip_rpc_with_delay 00:04:53.252 ************************************ 00:04:53.252 12:35:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.252 12:35:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.252 12:35:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.252 12:35:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.252 12:35:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.252 12:35:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.252 12:35:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.252 ************************************ 00:04:53.252 START TEST exit_on_failed_rpc_init 00:04:53.252 ************************************ 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57368 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57368 00:04:53.252 12:35:41 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57368 ']' 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.252 12:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.511 [2024-11-06 12:35:41.993941] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:04:53.511 [2024-11-06 12:35:41.994363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57368 ] 00:04:53.770 [2024-11-06 12:35:42.200112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.770 [2024-11-06 12:35:42.328749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.707 12:35:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.707 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.707 [2024-11-06 12:35:43.291878] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:04:54.707 [2024-11-06 12:35:43.292288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57386 ] 00:04:54.966 [2024-11-06 12:35:43.469262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.224 [2024-11-06 12:35:43.644837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.224 [2024-11-06 12:35:43.645001] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:55.224 [2024-11-06 12:35:43.645032] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.224 [2024-11-06 12:35:43.645061] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57368 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57368 ']' 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57368 00:04:55.482 12:35:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57368 00:04:55.482 killing process with pid 57368 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57368' 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57368 00:04:55.482 12:35:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57368 00:04:58.068 ************************************ 00:04:58.068 END TEST exit_on_failed_rpc_init 00:04:58.068 ************************************ 00:04:58.068 00:04:58.068 real 0m4.332s 00:04:58.068 user 0m4.863s 00:04:58.068 sys 0m0.674s 00:04:58.068 12:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.068 12:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.068 12:35:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.068 ************************************ 00:04:58.068 END TEST skip_rpc 00:04:58.068 ************************************ 00:04:58.068 00:04:58.068 real 0m23.624s 00:04:58.068 user 0m22.634s 00:04:58.068 sys 0m2.525s 00:04:58.068 12:35:46 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.068 12:35:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.068 12:35:46 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:58.068 12:35:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.068 12:35:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.068 12:35:46 -- common/autotest_common.sh@10 -- # set +x 00:04:58.068 ************************************ 00:04:58.068 START TEST rpc_client 00:04:58.068 ************************************ 00:04:58.068 12:35:46 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:58.068 * Looking for test storage... 00:04:58.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:58.068 12:35:46 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.068 12:35:46 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.068 12:35:46 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.068 12:35:46 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.068 12:35:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.069 12:35:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.069 --rc genhtml_branch_coverage=1 00:04:58.069 --rc genhtml_function_coverage=1 00:04:58.069 --rc genhtml_legend=1 00:04:58.069 --rc geninfo_all_blocks=1 00:04:58.069 --rc geninfo_unexecuted_blocks=1 00:04:58.069 00:04:58.069 ' 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.069 --rc genhtml_branch_coverage=1 00:04:58.069 --rc genhtml_function_coverage=1 00:04:58.069 --rc 
genhtml_legend=1 00:04:58.069 --rc geninfo_all_blocks=1 00:04:58.069 --rc geninfo_unexecuted_blocks=1 00:04:58.069 00:04:58.069 ' 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.069 --rc genhtml_branch_coverage=1 00:04:58.069 --rc genhtml_function_coverage=1 00:04:58.069 --rc genhtml_legend=1 00:04:58.069 --rc geninfo_all_blocks=1 00:04:58.069 --rc geninfo_unexecuted_blocks=1 00:04:58.069 00:04:58.069 ' 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.069 --rc genhtml_branch_coverage=1 00:04:58.069 --rc genhtml_function_coverage=1 00:04:58.069 --rc genhtml_legend=1 00:04:58.069 --rc geninfo_all_blocks=1 00:04:58.069 --rc geninfo_unexecuted_blocks=1 00:04:58.069 00:04:58.069 ' 00:04:58.069 12:35:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:58.069 OK 00:04:58.069 12:35:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.069 00:04:58.069 real 0m0.268s 00:04:58.069 user 0m0.155s 00:04:58.069 sys 0m0.117s 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.069 12:35:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.069 ************************************ 00:04:58.069 END TEST rpc_client 00:04:58.069 ************************************ 00:04:58.069 12:35:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:58.069 12:35:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.069 12:35:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.069 12:35:46 -- common/autotest_common.sh@10 -- # set +x 00:04:58.069 ************************************ 00:04:58.069 START TEST json_config 
00:04:58.069 ************************************ 00:04:58.069 12:35:46 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:58.069 12:35:46 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.069 12:35:46 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.069 12:35:46 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.331 12:35:46 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.331 12:35:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.331 12:35:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.331 12:35:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.331 12:35:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.331 12:35:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.331 12:35:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:58.331 12:35:46 json_config -- scripts/common.sh@345 -- # : 1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.331 12:35:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.331 12:35:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@353 -- # local d=1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.331 12:35:46 json_config -- scripts/common.sh@355 -- # echo 1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.331 12:35:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@353 -- # local d=2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.331 12:35:46 json_config -- scripts/common.sh@355 -- # echo 2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.331 12:35:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.331 12:35:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.331 12:35:46 json_config -- scripts/common.sh@368 -- # return 0 00:04:58.331 12:35:46 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.331 12:35:46 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.331 --rc genhtml_branch_coverage=1 00:04:58.331 --rc genhtml_function_coverage=1 00:04:58.331 --rc genhtml_legend=1 00:04:58.331 --rc geninfo_all_blocks=1 00:04:58.331 --rc geninfo_unexecuted_blocks=1 00:04:58.331 00:04:58.331 ' 00:04:58.331 12:35:46 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.331 --rc genhtml_branch_coverage=1 00:04:58.331 --rc genhtml_function_coverage=1 00:04:58.331 --rc genhtml_legend=1 00:04:58.331 --rc geninfo_all_blocks=1 00:04:58.331 --rc geninfo_unexecuted_blocks=1 00:04:58.331 00:04:58.331 ' 00:04:58.331 12:35:46 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.331 --rc genhtml_branch_coverage=1 00:04:58.331 --rc genhtml_function_coverage=1 00:04:58.331 --rc genhtml_legend=1 00:04:58.331 --rc geninfo_all_blocks=1 00:04:58.331 --rc geninfo_unexecuted_blocks=1 00:04:58.331 00:04:58.331 ' 00:04:58.331 12:35:46 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.331 --rc genhtml_branch_coverage=1 00:04:58.331 --rc genhtml_function_coverage=1 00:04:58.331 --rc genhtml_legend=1 00:04:58.331 --rc geninfo_all_blocks=1 00:04:58.331 --rc geninfo_unexecuted_blocks=1 00:04:58.331 00:04:58.331 ' 00:04:58.331 12:35:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7b06248-f3b3-4d29-8cee-a1767ec92231 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a7b06248-f3b3-4d29-8cee-a1767ec92231 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.331 12:35:46 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.331 12:35:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.331 12:35:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.331 12:35:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.331 12:35:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.331 12:35:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.331 12:35:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.331 12:35:46 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.332 12:35:46 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.332 12:35:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@51 -- # : 0 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.332 12:35:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.332 WARNING: No tests are enabled so not running JSON configuration tests 00:04:58.332 12:35:46 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.332 12:35:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.332 12:35:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.332 12:35:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.332 12:35:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.332 12:35:46 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:58.332 12:35:46 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:58.332 ************************************ 00:04:58.332 END TEST json_config 00:04:58.332 ************************************ 00:04:58.332 00:04:58.332 real 0m0.196s 00:04:58.332 user 0m0.116s 00:04:58.332 sys 0m0.079s 00:04:58.332 12:35:46 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.332 12:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.332 12:35:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:58.332 12:35:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.332 12:35:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.332 12:35:46 -- common/autotest_common.sh@10 -- # set +x 00:04:58.332 ************************************ 00:04:58.332 START TEST json_config_extra_key 00:04:58.332 ************************************ 00:04:58.332 12:35:46 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:58.332 12:35:46 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.332 12:35:46 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:04:58.332 12:35:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.592 12:35:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.592 12:35:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.592 12:35:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.592 --rc genhtml_branch_coverage=1 00:04:58.592 --rc genhtml_function_coverage=1 00:04:58.592 --rc genhtml_legend=1 00:04:58.592 --rc geninfo_all_blocks=1 00:04:58.592 --rc geninfo_unexecuted_blocks=1 00:04:58.592 00:04:58.592 ' 00:04:58.592 12:35:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.592 --rc genhtml_branch_coverage=1 00:04:58.592 --rc genhtml_function_coverage=1 00:04:58.592 --rc 
genhtml_legend=1 00:04:58.592 --rc geninfo_all_blocks=1 00:04:58.592 --rc geninfo_unexecuted_blocks=1 00:04:58.592 00:04:58.592 ' 00:04:58.592 12:35:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.592 --rc genhtml_branch_coverage=1 00:04:58.592 --rc genhtml_function_coverage=1 00:04:58.592 --rc genhtml_legend=1 00:04:58.592 --rc geninfo_all_blocks=1 00:04:58.592 --rc geninfo_unexecuted_blocks=1 00:04:58.592 00:04:58.592 ' 00:04:58.592 12:35:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.592 --rc genhtml_branch_coverage=1 00:04:58.592 --rc genhtml_function_coverage=1 00:04:58.592 --rc genhtml_legend=1 00:04:58.592 --rc geninfo_all_blocks=1 00:04:58.592 --rc geninfo_unexecuted_blocks=1 00:04:58.592 00:04:58.592 ' 00:04:58.592 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a7b06248-f3b3-4d29-8cee-a1767ec92231 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a7b06248-f3b3-4d29-8cee-a1767ec92231 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.592 12:35:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.592 12:35:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.593 12:35:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.593 12:35:47 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.593 12:35:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.593 12:35:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.593 12:35:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
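[Editor's note] The xtrace above (scripts/common.sh, `lt 1.15 2` via `cmp_versions`) shows the version-comparison logic step by step: split both versions on `.`/`-`/`:`, then compare element-wise until one side wins. A minimal standalone sketch reconstructed from the trace — names follow the trace, but the body is an approximation, not the exact scripts/common.sh source:

```shell
# Approximation of scripts/common.sh's lt(): return 0 iff $1 < $2.
lt() {
    local -a ver1 ver2
    local IFS='.-:' v ver1_l ver2_l   # split version strings on . - :
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer of the two component lists, as in the trace.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions: not strictly less-than
}
```

In the log this check gates the lcov options: `lt 1.15 2` succeeds (lcov 1.x is older than 2), so the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags are exported.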
00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.593 12:35:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:58.593 INFO: launching applications... 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:58.593 12:35:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57596 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.593 Waiting for target to run... 00:04:58.593 12:35:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57596 /var/tmp/spdk_tgt.sock 00:04:58.593 12:35:47 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57596 ']' 00:04:58.593 12:35:47 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.593 12:35:47 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.593 12:35:47 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:58.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.593 12:35:47 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.593 12:35:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.593 [2024-11-06 12:35:47.192438] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:04:58.593 [2024-11-06 12:35:47.192876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57596 ] 00:04:59.160 [2024-11-06 12:35:47.676007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.419 [2024-11-06 12:35:47.821090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.986 00:04:59.986 INFO: shutting down applications... 00:04:59.986 12:35:48 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.986 12:35:48 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.986 12:35:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:59.986 12:35:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57596 ]] 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57596 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:59.986 12:35:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.554 12:35:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.554 12:35:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.554 12:35:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:05:00.554 12:35:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.183 12:35:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.183 12:35:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.183 12:35:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:05:01.183 12:35:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.443 12:35:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.443 12:35:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.443 12:35:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:05:01.443 12:35:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.009 12:35:50 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:02.009 12:35:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.009 12:35:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:05:02.009 12:35:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.576 12:35:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.576 12:35:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.576 12:35:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:05:02.576 12:35:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:05:03.143 SPDK target shutdown done 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.143 12:35:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.143 Success 00:05:03.143 12:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.143 ************************************ 00:05:03.143 END TEST json_config_extra_key 00:05:03.143 ************************************ 00:05:03.144 00:05:03.144 real 0m4.652s 00:05:03.144 user 0m4.044s 00:05:03.144 sys 0m0.651s 00:05:03.144 12:35:51 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.144 12:35:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.144 12:35:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.144 12:35:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.144 12:35:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.144 12:35:51 -- common/autotest_common.sh@10 -- # set +x 00:05:03.144 ************************************ 00:05:03.144 START TEST alias_rpc 00:05:03.144 ************************************ 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.144 * Looking for test storage... 00:05:03.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.144 12:35:51 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.144 12:35:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc 
genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.144 --rc genhtml_branch_coverage=1 00:05:03.144 --rc genhtml_function_coverage=1 00:05:03.144 --rc genhtml_legend=1 00:05:03.144 --rc geninfo_all_blocks=1 00:05:03.144 --rc geninfo_unexecuted_blocks=1 00:05:03.144 00:05:03.144 ' 00:05:03.144 12:35:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:03.144 12:35:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57713 00:05:03.144 12:35:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.144 12:35:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57713 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57713 ']' 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.144 12:35:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.403 [2024-11-06 12:35:51.887248] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:05:03.403 [2024-11-06 12:35:51.887649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57713 ] 00:05:03.660 [2024-11-06 12:35:52.063429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.660 [2024-11-06 12:35:52.196670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.616 12:35:53 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.616 12:35:53 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:04.616 12:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:04.875 12:35:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57713 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57713 ']' 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57713 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57713 00:05:04.875 killing process with pid 57713 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57713' 00:05:04.875 12:35:53 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57713 00:05:04.875 12:35:53 alias_rpc -- common/autotest_common.sh@976 -- # wait 57713 00:05:07.434 ************************************ 00:05:07.434 END TEST alias_rpc 00:05:07.434 ************************************ 00:05:07.434 00:05:07.434 real 0m4.129s 00:05:07.434 user 0m4.311s 00:05:07.434 sys 0m0.616s 00:05:07.434 12:35:55 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.434 12:35:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 12:35:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:07.434 12:35:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:07.434 12:35:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.434 12:35:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.434 12:35:55 -- common/autotest_common.sh@10 -- # set +x 00:05:07.434 ************************************ 00:05:07.434 START TEST spdkcli_tcp 00:05:07.434 ************************************ 00:05:07.434 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:07.434 * Looking for test storage... 
00:05:07.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.435 12:35:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.435 --rc genhtml_branch_coverage=1 00:05:07.435 --rc genhtml_function_coverage=1 00:05:07.435 --rc genhtml_legend=1 00:05:07.435 --rc geninfo_all_blocks=1 00:05:07.435 --rc geninfo_unexecuted_blocks=1 00:05:07.435 00:05:07.435 ' 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.435 --rc genhtml_branch_coverage=1 00:05:07.435 --rc genhtml_function_coverage=1 00:05:07.435 --rc genhtml_legend=1 00:05:07.435 --rc geninfo_all_blocks=1 00:05:07.435 --rc geninfo_unexecuted_blocks=1 00:05:07.435 00:05:07.435 ' 00:05:07.435 12:35:55 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.435 --rc genhtml_branch_coverage=1 00:05:07.435 --rc genhtml_function_coverage=1 00:05:07.435 --rc genhtml_legend=1 00:05:07.435 --rc geninfo_all_blocks=1 00:05:07.435 --rc geninfo_unexecuted_blocks=1 00:05:07.435 00:05:07.435 ' 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.435 --rc genhtml_branch_coverage=1 00:05:07.435 --rc genhtml_function_coverage=1 00:05:07.435 --rc genhtml_legend=1 00:05:07.435 --rc geninfo_all_blocks=1 00:05:07.435 --rc geninfo_unexecuted_blocks=1 00:05:07.435 00:05:07.435 ' 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57820 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:07.435 12:35:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57820 00:05:07.435 12:35:55 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 57820 ']' 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.435 12:35:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.435 [2024-11-06 12:35:56.087262] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:05:07.435 [2024-11-06 12:35:56.087692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57820 ] 00:05:07.694 [2024-11-06 12:35:56.274059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.953 [2024-11-06 12:35:56.437414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.953 [2024-11-06 12:35:56.437419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.890 12:35:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.890 12:35:57 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:08.890 12:35:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57837 00:05:08.890 12:35:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:08.890 12:35:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.149 [ 00:05:09.149 "bdev_malloc_delete", 
00:05:09.149 "bdev_malloc_create", 00:05:09.149 "bdev_null_resize", 00:05:09.149 "bdev_null_delete", 00:05:09.149 "bdev_null_create", 00:05:09.149 "bdev_nvme_cuse_unregister", 00:05:09.149 "bdev_nvme_cuse_register", 00:05:09.149 "bdev_opal_new_user", 00:05:09.149 "bdev_opal_set_lock_state", 00:05:09.149 "bdev_opal_delete", 00:05:09.149 "bdev_opal_get_info", 00:05:09.149 "bdev_opal_create", 00:05:09.149 "bdev_nvme_opal_revert", 00:05:09.149 "bdev_nvme_opal_init", 00:05:09.149 "bdev_nvme_send_cmd", 00:05:09.149 "bdev_nvme_set_keys", 00:05:09.149 "bdev_nvme_get_path_iostat", 00:05:09.149 "bdev_nvme_get_mdns_discovery_info", 00:05:09.149 "bdev_nvme_stop_mdns_discovery", 00:05:09.149 "bdev_nvme_start_mdns_discovery", 00:05:09.149 "bdev_nvme_set_multipath_policy", 00:05:09.149 "bdev_nvme_set_preferred_path", 00:05:09.149 "bdev_nvme_get_io_paths", 00:05:09.149 "bdev_nvme_remove_error_injection", 00:05:09.149 "bdev_nvme_add_error_injection", 00:05:09.149 "bdev_nvme_get_discovery_info", 00:05:09.149 "bdev_nvme_stop_discovery", 00:05:09.149 "bdev_nvme_start_discovery", 00:05:09.149 "bdev_nvme_get_controller_health_info", 00:05:09.149 "bdev_nvme_disable_controller", 00:05:09.149 "bdev_nvme_enable_controller", 00:05:09.149 "bdev_nvme_reset_controller", 00:05:09.149 "bdev_nvme_get_transport_statistics", 00:05:09.149 "bdev_nvme_apply_firmware", 00:05:09.149 "bdev_nvme_detach_controller", 00:05:09.149 "bdev_nvme_get_controllers", 00:05:09.149 "bdev_nvme_attach_controller", 00:05:09.149 "bdev_nvme_set_hotplug", 00:05:09.149 "bdev_nvme_set_options", 00:05:09.149 "bdev_passthru_delete", 00:05:09.149 "bdev_passthru_create", 00:05:09.149 "bdev_lvol_set_parent_bdev", 00:05:09.149 "bdev_lvol_set_parent", 00:05:09.149 "bdev_lvol_check_shallow_copy", 00:05:09.149 "bdev_lvol_start_shallow_copy", 00:05:09.149 "bdev_lvol_grow_lvstore", 00:05:09.149 "bdev_lvol_get_lvols", 00:05:09.149 "bdev_lvol_get_lvstores", 00:05:09.149 "bdev_lvol_delete", 00:05:09.149 "bdev_lvol_set_read_only", 
00:05:09.149 "bdev_lvol_resize", 00:05:09.149 "bdev_lvol_decouple_parent", 00:05:09.149 "bdev_lvol_inflate", 00:05:09.149 "bdev_lvol_rename", 00:05:09.149 "bdev_lvol_clone_bdev", 00:05:09.149 "bdev_lvol_clone", 00:05:09.149 "bdev_lvol_snapshot", 00:05:09.149 "bdev_lvol_create", 00:05:09.150 "bdev_lvol_delete_lvstore", 00:05:09.150 "bdev_lvol_rename_lvstore", 00:05:09.150 "bdev_lvol_create_lvstore", 00:05:09.150 "bdev_raid_set_options", 00:05:09.150 "bdev_raid_remove_base_bdev", 00:05:09.150 "bdev_raid_add_base_bdev", 00:05:09.150 "bdev_raid_delete", 00:05:09.150 "bdev_raid_create", 00:05:09.150 "bdev_raid_get_bdevs", 00:05:09.150 "bdev_error_inject_error", 00:05:09.150 "bdev_error_delete", 00:05:09.150 "bdev_error_create", 00:05:09.150 "bdev_split_delete", 00:05:09.150 "bdev_split_create", 00:05:09.150 "bdev_delay_delete", 00:05:09.150 "bdev_delay_create", 00:05:09.150 "bdev_delay_update_latency", 00:05:09.150 "bdev_zone_block_delete", 00:05:09.150 "bdev_zone_block_create", 00:05:09.150 "blobfs_create", 00:05:09.150 "blobfs_detect", 00:05:09.150 "blobfs_set_cache_size", 00:05:09.150 "bdev_aio_delete", 00:05:09.150 "bdev_aio_rescan", 00:05:09.150 "bdev_aio_create", 00:05:09.150 "bdev_ftl_set_property", 00:05:09.150 "bdev_ftl_get_properties", 00:05:09.150 "bdev_ftl_get_stats", 00:05:09.150 "bdev_ftl_unmap", 00:05:09.150 "bdev_ftl_unload", 00:05:09.150 "bdev_ftl_delete", 00:05:09.150 "bdev_ftl_load", 00:05:09.150 "bdev_ftl_create", 00:05:09.150 "bdev_virtio_attach_controller", 00:05:09.150 "bdev_virtio_scsi_get_devices", 00:05:09.150 "bdev_virtio_detach_controller", 00:05:09.150 "bdev_virtio_blk_set_hotplug", 00:05:09.150 "bdev_iscsi_delete", 00:05:09.150 "bdev_iscsi_create", 00:05:09.150 "bdev_iscsi_set_options", 00:05:09.150 "accel_error_inject_error", 00:05:09.150 "ioat_scan_accel_module", 00:05:09.150 "dsa_scan_accel_module", 00:05:09.150 "iaa_scan_accel_module", 00:05:09.150 "keyring_file_remove_key", 00:05:09.150 "keyring_file_add_key", 00:05:09.150 
"keyring_linux_set_options", 00:05:09.150 "fsdev_aio_delete", 00:05:09.150 "fsdev_aio_create", 00:05:09.150 "iscsi_get_histogram", 00:05:09.150 "iscsi_enable_histogram", 00:05:09.150 "iscsi_set_options", 00:05:09.150 "iscsi_get_auth_groups", 00:05:09.150 "iscsi_auth_group_remove_secret", 00:05:09.150 "iscsi_auth_group_add_secret", 00:05:09.150 "iscsi_delete_auth_group", 00:05:09.150 "iscsi_create_auth_group", 00:05:09.150 "iscsi_set_discovery_auth", 00:05:09.150 "iscsi_get_options", 00:05:09.150 "iscsi_target_node_request_logout", 00:05:09.150 "iscsi_target_node_set_redirect", 00:05:09.150 "iscsi_target_node_set_auth", 00:05:09.150 "iscsi_target_node_add_lun", 00:05:09.150 "iscsi_get_stats", 00:05:09.150 "iscsi_get_connections", 00:05:09.150 "iscsi_portal_group_set_auth", 00:05:09.150 "iscsi_start_portal_group", 00:05:09.150 "iscsi_delete_portal_group", 00:05:09.150 "iscsi_create_portal_group", 00:05:09.150 "iscsi_get_portal_groups", 00:05:09.150 "iscsi_delete_target_node", 00:05:09.150 "iscsi_target_node_remove_pg_ig_maps", 00:05:09.150 "iscsi_target_node_add_pg_ig_maps", 00:05:09.150 "iscsi_create_target_node", 00:05:09.150 "iscsi_get_target_nodes", 00:05:09.150 "iscsi_delete_initiator_group", 00:05:09.150 "iscsi_initiator_group_remove_initiators", 00:05:09.150 "iscsi_initiator_group_add_initiators", 00:05:09.150 "iscsi_create_initiator_group", 00:05:09.150 "iscsi_get_initiator_groups", 00:05:09.150 "nvmf_set_crdt", 00:05:09.150 "nvmf_set_config", 00:05:09.150 "nvmf_set_max_subsystems", 00:05:09.150 "nvmf_stop_mdns_prr", 00:05:09.150 "nvmf_publish_mdns_prr", 00:05:09.150 "nvmf_subsystem_get_listeners", 00:05:09.150 "nvmf_subsystem_get_qpairs", 00:05:09.150 "nvmf_subsystem_get_controllers", 00:05:09.150 "nvmf_get_stats", 00:05:09.150 "nvmf_get_transports", 00:05:09.150 "nvmf_create_transport", 00:05:09.150 "nvmf_get_targets", 00:05:09.150 "nvmf_delete_target", 00:05:09.150 "nvmf_create_target", 00:05:09.150 "nvmf_subsystem_allow_any_host", 00:05:09.150 
"nvmf_subsystem_set_keys", 00:05:09.150 "nvmf_subsystem_remove_host", 00:05:09.150 "nvmf_subsystem_add_host", 00:05:09.150 "nvmf_ns_remove_host", 00:05:09.150 "nvmf_ns_add_host", 00:05:09.150 "nvmf_subsystem_remove_ns", 00:05:09.150 "nvmf_subsystem_set_ns_ana_group", 00:05:09.150 "nvmf_subsystem_add_ns", 00:05:09.150 "nvmf_subsystem_listener_set_ana_state", 00:05:09.150 "nvmf_discovery_get_referrals", 00:05:09.150 "nvmf_discovery_remove_referral", 00:05:09.150 "nvmf_discovery_add_referral", 00:05:09.150 "nvmf_subsystem_remove_listener", 00:05:09.150 "nvmf_subsystem_add_listener", 00:05:09.150 "nvmf_delete_subsystem", 00:05:09.150 "nvmf_create_subsystem", 00:05:09.150 "nvmf_get_subsystems", 00:05:09.150 "env_dpdk_get_mem_stats", 00:05:09.150 "nbd_get_disks", 00:05:09.150 "nbd_stop_disk", 00:05:09.150 "nbd_start_disk", 00:05:09.150 "ublk_recover_disk", 00:05:09.150 "ublk_get_disks", 00:05:09.150 "ublk_stop_disk", 00:05:09.150 "ublk_start_disk", 00:05:09.150 "ublk_destroy_target", 00:05:09.150 "ublk_create_target", 00:05:09.150 "virtio_blk_create_transport", 00:05:09.150 "virtio_blk_get_transports", 00:05:09.150 "vhost_controller_set_coalescing", 00:05:09.150 "vhost_get_controllers", 00:05:09.150 "vhost_delete_controller", 00:05:09.150 "vhost_create_blk_controller", 00:05:09.150 "vhost_scsi_controller_remove_target", 00:05:09.150 "vhost_scsi_controller_add_target", 00:05:09.150 "vhost_start_scsi_controller", 00:05:09.150 "vhost_create_scsi_controller", 00:05:09.150 "thread_set_cpumask", 00:05:09.150 "scheduler_set_options", 00:05:09.150 "framework_get_governor", 00:05:09.150 "framework_get_scheduler", 00:05:09.150 "framework_set_scheduler", 00:05:09.150 "framework_get_reactors", 00:05:09.150 "thread_get_io_channels", 00:05:09.150 "thread_get_pollers", 00:05:09.150 "thread_get_stats", 00:05:09.150 "framework_monitor_context_switch", 00:05:09.150 "spdk_kill_instance", 00:05:09.150 "log_enable_timestamps", 00:05:09.151 "log_get_flags", 00:05:09.151 "log_clear_flag", 
00:05:09.151 "log_set_flag", 00:05:09.151 "log_get_level", 00:05:09.151 "log_set_level", 00:05:09.151 "log_get_print_level", 00:05:09.151 "log_set_print_level", 00:05:09.151 "framework_enable_cpumask_locks", 00:05:09.151 "framework_disable_cpumask_locks", 00:05:09.151 "framework_wait_init", 00:05:09.151 "framework_start_init", 00:05:09.151 "scsi_get_devices", 00:05:09.151 "bdev_get_histogram", 00:05:09.151 "bdev_enable_histogram", 00:05:09.151 "bdev_set_qos_limit", 00:05:09.151 "bdev_set_qd_sampling_period", 00:05:09.151 "bdev_get_bdevs", 00:05:09.151 "bdev_reset_iostat", 00:05:09.151 "bdev_get_iostat", 00:05:09.151 "bdev_examine", 00:05:09.151 "bdev_wait_for_examine", 00:05:09.151 "bdev_set_options", 00:05:09.151 "accel_get_stats", 00:05:09.151 "accel_set_options", 00:05:09.151 "accel_set_driver", 00:05:09.151 "accel_crypto_key_destroy", 00:05:09.151 "accel_crypto_keys_get", 00:05:09.151 "accel_crypto_key_create", 00:05:09.151 "accel_assign_opc", 00:05:09.151 "accel_get_module_info", 00:05:09.151 "accel_get_opc_assignments", 00:05:09.151 "vmd_rescan", 00:05:09.151 "vmd_remove_device", 00:05:09.151 "vmd_enable", 00:05:09.151 "sock_get_default_impl", 00:05:09.151 "sock_set_default_impl", 00:05:09.151 "sock_impl_set_options", 00:05:09.151 "sock_impl_get_options", 00:05:09.151 "iobuf_get_stats", 00:05:09.151 "iobuf_set_options", 00:05:09.151 "keyring_get_keys", 00:05:09.151 "framework_get_pci_devices", 00:05:09.151 "framework_get_config", 00:05:09.151 "framework_get_subsystems", 00:05:09.151 "fsdev_set_opts", 00:05:09.151 "fsdev_get_opts", 00:05:09.151 "trace_get_info", 00:05:09.151 "trace_get_tpoint_group_mask", 00:05:09.151 "trace_disable_tpoint_group", 00:05:09.151 "trace_enable_tpoint_group", 00:05:09.151 "trace_clear_tpoint_mask", 00:05:09.151 "trace_set_tpoint_mask", 00:05:09.151 "notify_get_notifications", 00:05:09.151 "notify_get_types", 00:05:09.151 "spdk_get_version", 00:05:09.151 "rpc_get_methods" 00:05:09.151 ] 00:05:09.151 12:35:57 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 12:35:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:09.151 12:35:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57820 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57820 ']' 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57820 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57820 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:09.151 killing process with pid 57820 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57820' 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57820 00:05:09.151 12:35:57 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57820 00:05:11.677 ************************************ 00:05:11.677 END TEST spdkcli_tcp 00:05:11.677 ************************************ 00:05:11.677 00:05:11.677 real 0m4.153s 00:05:11.677 user 0m7.589s 00:05:11.677 sys 0m0.683s 00:05:11.677 12:35:59 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.677 12:35:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.677 12:35:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.677 12:35:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.677 12:35:59 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.677 12:35:59 -- common/autotest_common.sh@10 -- # set +x 00:05:11.677 ************************************ 00:05:11.677 START TEST dpdk_mem_utility 00:05:11.677 ************************************ 00:05:11.677 12:35:59 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.677 * Looking for test storage... 00:05:11.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:11.677 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.677 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.677 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.677 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.677 12:36:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:11.678 
12:36:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
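The `killprocess` helper invoked throughout this log (the `kill -0 57713` and `kill -0 57820` probes above) can be sketched as follows. This is a simplified assumption of its behavior: the traced version also inspects the process name with `ps` and special-cases `sudo`, which is omitted here.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above: probe with kill -0,
# send SIGTERM, then wait for the process to exit. Simplified: the real
# helper also checks the process name via ps and refuses to kill sudo.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests that the pid exists
    kill -0 "$pid" 2>/dev/null || return 0
    echo "killing process with pid $pid"
    kill "$pid"
    # poll until the process has actually exited
    while kill -0 "$pid" 2>/dev/null; do
        sleep 0.1
    done
}
```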
00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.678 12:36:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.678 --rc genhtml_branch_coverage=1 00:05:11.678 --rc genhtml_function_coverage=1 00:05:11.678 --rc genhtml_legend=1 00:05:11.678 --rc geninfo_all_blocks=1 00:05:11.678 --rc geninfo_unexecuted_blocks=1 00:05:11.678 00:05:11.678 ' 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.678 --rc genhtml_branch_coverage=1 00:05:11.678 --rc genhtml_function_coverage=1 00:05:11.678 --rc genhtml_legend=1 00:05:11.678 --rc geninfo_all_blocks=1 00:05:11.678 --rc geninfo_unexecuted_blocks=1 00:05:11.678 00:05:11.678 ' 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.678 --rc genhtml_branch_coverage=1 00:05:11.678 --rc genhtml_function_coverage=1 00:05:11.678 --rc genhtml_legend=1 00:05:11.678 --rc geninfo_all_blocks=1 00:05:11.678 --rc geninfo_unexecuted_blocks=1 00:05:11.678 00:05:11.678 ' 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.678 --rc genhtml_branch_coverage=1 00:05:11.678 --rc genhtml_function_coverage=1 00:05:11.678 --rc genhtml_legend=1 00:05:11.678 --rc geninfo_all_blocks=1 00:05:11.678 --rc geninfo_unexecuted_blocks=1 00:05:11.678 00:05:11.678 ' 00:05:11.678 12:36:00 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:11.678 12:36:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57942 00:05:11.678 12:36:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57942 00:05:11.678 12:36:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57942 ']' 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:11.678 12:36:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.678 [2024-11-06 12:36:00.288241] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
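The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares the fields numerically, left to right, treating missing fields as zero. A condensed sketch, assuming purely numeric fields (the traced script routes each field through a `decimal` helper, omitted here):

```shell
#!/usr/bin/env bash
# Condensed sketch of the scripts/common.sh lt/cmp_versions logic traced
# above: split on . - : and compare numerically, field by field.
lt() {  # succeeds when version $1 sorts strictly before version $2
    local -a ver1 ver2
    local v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

Comparing numerically rather than lexically is the point of the field split: a plain string sort would put `2.39.10` before `2.39.2`.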
00:05:11.678 [2024-11-06 12:36:00.288399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57942 ] 00:05:11.937 [2024-11-06 12:36:00.464122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.937 [2024-11-06 12:36:00.591816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.870 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.870 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:12.870 12:36:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.870 12:36:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.870 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.870 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.870 { 00:05:12.870 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.870 } 00:05:12.870 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.870 12:36:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.870 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:12.870 1 heaps totaling size 816.000000 MiB 00:05:12.870 size: 816.000000 MiB heap id: 0 00:05:12.870 end heaps---------- 00:05:12.870 9 mempools totaling size 595.772034 MiB 00:05:12.870 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.870 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.870 size: 92.545471 MiB name: bdev_io_57942 00:05:12.870 size: 50.003479 MiB name: msgpool_57942 00:05:12.870 size: 36.509338 MiB name: fsdev_io_57942 00:05:12.870 size: 
21.763794 MiB name: PDU_Pool 00:05:12.870 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.870 size: 4.133484 MiB name: evtpool_57942 00:05:12.870 size: 0.026123 MiB name: Session_Pool 00:05:12.870 end mempools------- 00:05:12.870 6 memzones totaling size 4.142822 MiB 00:05:12.870 size: 1.000366 MiB name: RG_ring_0_57942 00:05:12.870 size: 1.000366 MiB name: RG_ring_1_57942 00:05:12.870 size: 1.000366 MiB name: RG_ring_4_57942 00:05:12.870 size: 1.000366 MiB name: RG_ring_5_57942 00:05:12.870 size: 0.125366 MiB name: RG_ring_2_57942 00:05:12.870 size: 0.015991 MiB name: RG_ring_3_57942 00:05:12.870 end memzones------- 00:05:12.870 12:36:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.132 heap id: 0 total size: 816.000000 MiB number of busy elements: 310 number of free elements: 18 00:05:13.132 list of free elements. size: 16.792603 MiB 00:05:13.132 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:13.132 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:13.132 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:13.132 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:13.132 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:13.132 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:13.132 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:13.132 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:13.132 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:13.132 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:13.132 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:13.132 element at address: 0x20001ac00000 with size: 0.562927 MiB 00:05:13.132 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:13.132 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:13.132 element at address: 0x200019600000 
with size: 0.485413 MiB 00:05:13.132 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:13.132 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:13.132 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:13.132 list of standard malloc elements. size: 199.286499 MiB 00:05:13.132 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:13.132 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:13.132 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:13.132 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:13.132 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:13.132 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.132 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:13.132 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.132 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:13.132 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:13.132 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:13.132 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:13.132 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:13.132 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:13.132 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:13.132 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:13.133 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:13.133 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e4c0 with size: 0.000244 
MiB 00:05:13.133 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff580 
with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:13.133 element at 
address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:13.133 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:13.133 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac912c0 with size: 0.000244 MiB 
00:05:13.133 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac92ec0 with 
size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:13.133 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:13.134 element at address: 
0x20001ac94ac0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:13.134 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:13.134 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:13.134 
element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806d980 with size: 0.000244 
MiB 00:05:13.134 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f580 
with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:13.134 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:13.134 list of memzone associated elements. size: 599.920898 MiB 00:05:13.134 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:13.134 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.134 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:13.134 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.134 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:13.134 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57942_0 00:05:13.134 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:13.134 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57942_0 00:05:13.134 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:13.134 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57942_0 00:05:13.134 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:13.134 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.134 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:13.134 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.134 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:13.134 associated memzone info: size: 3.000122 MiB name: 
MP_evtpool_57942_0 00:05:13.134 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:13.134 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57942 00:05:13.134 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.134 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57942 00:05:13.134 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:13.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.134 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:13.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.134 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:13.134 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.134 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:13.134 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.134 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:13.134 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57942 00:05:13.134 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:13.134 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57942 00:05:13.134 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:13.134 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57942 00:05:13.134 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:13.135 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57942 00:05:13.135 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:13.135 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57942 00:05:13.135 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:13.135 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57942 00:05:13.135 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:13.135 associated memzone info: size: 0.500366 MiB name: 
RG_MP_PDU_Pool 00:05:13.135 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:13.135 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.135 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:13.135 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.135 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:13.135 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57942 00:05:13.135 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:13.135 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57942 00:05:13.135 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:13.135 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.135 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:13.135 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.135 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:13.135 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57942 00:05:13.135 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:13.135 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.135 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:13.135 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57942 00:05:13.135 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:13.135 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57942 00:05:13.135 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:13.135 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57942 00:05:13.135 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:13.135 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.135 12:36:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.135 
12:36:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57942 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57942 ']' 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57942 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57942 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57942' 00:05:13.135 killing process with pid 57942 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57942 00:05:13.135 12:36:01 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57942 00:05:15.675 00:05:15.675 real 0m3.839s 00:05:15.675 user 0m3.843s 00:05:15.675 sys 0m0.606s 00:05:15.675 12:36:03 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.675 ************************************ 00:05:15.675 END TEST dpdk_mem_utility 00:05:15.675 12:36:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.675 ************************************ 00:05:15.675 12:36:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.675 12:36:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.675 12:36:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.675 12:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:15.675 ************************************ 00:05:15.675 START TEST event 00:05:15.675 ************************************ 00:05:15.675 
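The `killprocess` trace above follows a recognizable pattern: probe the PID with `kill -0`, inspect the process name via `ps -o comm=`, refuse to signal a `sudo` wrapper, then kill and reap with `wait`. A minimal sketch of that pattern is below; the function name and structure are reconstructed from the xtrace output, not taken from the actual `autotest_common.sh` source.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the killprocess pattern seen in the trace.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 sends no signal; it only tests whether the PID exists
    # and is signalable by this user.
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # The trace guards against terminating a sudo wrapper directly.
    [ "$name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child if it belongs to this shell; the redirect
    # keeps it quiet when the PID is not our child.
    wait "$pid" 2>/dev/null
    return 0
}
```

Reaping with `wait` matters in test scripts: it guarantees the process is gone (not merely signaled) before the next test stage reuses its resources.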
12:36:03 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.675 * Looking for test storage... 00:05:15.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:15.675 12:36:03 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.675 12:36:03 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.675 12:36:03 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.675 12:36:03 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.675 12:36:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.675 12:36:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.675 12:36:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.675 12:36:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.675 12:36:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.675 12:36:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.675 12:36:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.675 12:36:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.675 12:36:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.675 12:36:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.675 12:36:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.675 12:36:03 event -- scripts/common.sh@344 -- # case "$op" in 00:05:15.675 12:36:03 event -- scripts/common.sh@345 -- # : 1 00:05:15.675 12:36:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.675 12:36:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.675 12:36:03 event -- scripts/common.sh@365 -- # decimal 1 00:05:15.675 12:36:04 event -- scripts/common.sh@353 -- # local d=1 00:05:15.675 12:36:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.675 12:36:04 event -- scripts/common.sh@355 -- # echo 1 00:05:15.675 12:36:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.675 12:36:04 event -- scripts/common.sh@366 -- # decimal 2 00:05:15.675 12:36:04 event -- scripts/common.sh@353 -- # local d=2 00:05:15.675 12:36:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.675 12:36:04 event -- scripts/common.sh@355 -- # echo 2 00:05:15.675 12:36:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.675 12:36:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.675 12:36:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.675 12:36:04 event -- scripts/common.sh@368 -- # return 0 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.675 --rc genhtml_branch_coverage=1 00:05:15.675 --rc genhtml_function_coverage=1 00:05:15.675 --rc genhtml_legend=1 00:05:15.675 --rc geninfo_all_blocks=1 00:05:15.675 --rc geninfo_unexecuted_blocks=1 00:05:15.675 00:05:15.675 ' 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.675 --rc genhtml_branch_coverage=1 00:05:15.675 --rc genhtml_function_coverage=1 00:05:15.675 --rc genhtml_legend=1 00:05:15.675 --rc geninfo_all_blocks=1 00:05:15.675 --rc geninfo_unexecuted_blocks=1 00:05:15.675 00:05:15.675 ' 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.675 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:15.675 --rc genhtml_branch_coverage=1 00:05:15.675 --rc genhtml_function_coverage=1 00:05:15.675 --rc genhtml_legend=1 00:05:15.675 --rc geninfo_all_blocks=1 00:05:15.675 --rc geninfo_unexecuted_blocks=1 00:05:15.675 00:05:15.675 ' 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.675 --rc genhtml_branch_coverage=1 00:05:15.675 --rc genhtml_function_coverage=1 00:05:15.675 --rc genhtml_legend=1 00:05:15.675 --rc geninfo_all_blocks=1 00:05:15.675 --rc geninfo_unexecuted_blocks=1 00:05:15.675 00:05:15.675 ' 00:05:15.675 12:36:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:15.675 12:36:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.675 12:36:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:15.675 12:36:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.675 12:36:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.675 ************************************ 00:05:15.675 START TEST event_perf 00:05:15.675 ************************************ 00:05:15.675 12:36:04 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.675 Running I/O for 1 seconds...[2024-11-06 12:36:04.074684] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:05:15.675 [2024-11-06 12:36:04.075314] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58045 ] 00:05:15.675 [2024-11-06 12:36:04.259621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.932 [2024-11-06 12:36:04.396258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.932 [2024-11-06 12:36:04.396329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.932 [2024-11-06 12:36:04.396952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.932 Running I/O for 1 seconds...[2024-11-06 12:36:04.397311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.307 00:05:17.307 lcore 0: 126171 00:05:17.307 lcore 1: 126171 00:05:17.307 lcore 2: 126172 00:05:17.307 lcore 3: 126173 00:05:17.307 done. 
00:05:17.307 00:05:17.307 real 0m1.615s 00:05:17.307 user 0m4.367s 00:05:17.307 sys 0m0.117s 00:05:17.307 ************************************ 00:05:17.307 END TEST event_perf 00:05:17.307 ************************************ 00:05:17.307 12:36:05 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.307 12:36:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.307 12:36:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:17.307 12:36:05 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:17.307 12:36:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.307 12:36:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.307 ************************************ 00:05:17.307 START TEST event_reactor 00:05:17.307 ************************************ 00:05:17.307 12:36:05 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:17.307 [2024-11-06 12:36:05.737441] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
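Every test in this log is launched through `run_test`, which prints the `START TEST`/`END TEST` asterisk banners and the real/user/sys timings seen above. Below is a simplified sketch of such a wrapper, not SPDK's actual autotest_common.sh helper (the real one also toggles xtrace and validates its arguments, which is what checks like `'[' 4 -le 1 ']'` in the trace are doing):

```shell
#!/usr/bin/env bash
# Simplified run_test-style wrapper: banner, timed command, banner.
# Sketch only -- not SPDK's implementation.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                  # produces real/user/sys lines like the log's
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}
```

Typical use mirrors the log: `run_test event_perf ./event_perf -m 0xF -t 1`.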
00:05:17.307 [2024-11-06 12:36:05.737630] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58084 ] 00:05:17.307 [2024-11-06 12:36:05.922259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.564 [2024-11-06 12:36:06.055205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.939 test_start 00:05:18.939 oneshot 00:05:18.939 tick 100 00:05:18.939 tick 100 00:05:18.939 tick 250 00:05:18.939 tick 100 00:05:18.939 tick 100 00:05:18.939 tick 250 00:05:18.939 tick 100 00:05:18.939 tick 500 00:05:18.939 tick 100 00:05:18.939 tick 100 00:05:18.939 tick 250 00:05:18.939 tick 100 00:05:18.939 tick 100 00:05:18.939 test_end 00:05:18.939 ************************************ 00:05:18.939 END TEST event_reactor 00:05:18.939 ************************************ 00:05:18.939 00:05:18.939 real 0m1.594s 00:05:18.939 user 0m1.388s 00:05:18.939 sys 0m0.095s 00:05:18.939 12:36:07 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.939 12:36:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.939 12:36:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.939 12:36:07 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:18.939 12:36:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.939 12:36:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.939 ************************************ 00:05:18.939 START TEST event_reactor_perf 00:05:18.939 ************************************ 00:05:18.939 12:36:07 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.939 [2024-11-06 
12:36:07.372730] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:05:18.939 [2024-11-06 12:36:07.372875] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58126 ] 00:05:18.939 [2024-11-06 12:36:07.543587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.197 [2024-11-06 12:36:07.671780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.571 test_start 00:05:20.571 test_end 00:05:20.571 Performance: 283310 events per second 00:05:20.571 ************************************ 00:05:20.571 END TEST event_reactor_perf 00:05:20.571 ************************************ 00:05:20.571 00:05:20.571 real 0m1.561s 00:05:20.571 user 0m1.367s 00:05:20.571 sys 0m0.085s 00:05:20.571 12:36:08 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.571 12:36:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.571 12:36:08 event -- event/event.sh@49 -- # uname -s 00:05:20.571 12:36:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.571 12:36:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:20.571 12:36:08 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.571 12:36:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.571 12:36:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.571 ************************************ 00:05:20.571 START TEST event_scheduler 00:05:20.571 ************************************ 00:05:20.571 12:36:08 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:20.571 * Looking for test storage... 
00:05:20.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:20.571 12:36:09 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.571 12:36:09 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.571 12:36:09 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.571 12:36:09 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.571 12:36:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
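The xtrace above walks scripts/common.sh's version comparison (`lt 1.15 2` via `cmp_versions`): each version string is split on `.`, `-` and `:` into an array, and components are compared numerically until one differs. A self-contained sketch of the same idea (`ver_lt` is an illustrative name, not the SPDK helper):

```shell
#!/usr/bin/env bash
# Compare two version strings component by component, mirroring the
# cmp_versions trace above.
ver_lt() {                     # returns 0 (true) if $1 < $2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first differing component decides
        (( a < b )) && return 0
    done
    return 1                      # equal is not strictly less-than
}
```

`ver_lt 1.15 2` succeeds for the same reason the traced comparison does: 1 < 2 in the first component, so lcov 1.15 is treated as older than 2.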
00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.572 --rc genhtml_branch_coverage=1 00:05:20.572 --rc genhtml_function_coverage=1 00:05:20.572 --rc genhtml_legend=1 00:05:20.572 --rc geninfo_all_blocks=1 00:05:20.572 --rc geninfo_unexecuted_blocks=1 00:05:20.572 00:05:20.572 ' 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.572 --rc genhtml_branch_coverage=1 00:05:20.572 --rc genhtml_function_coverage=1 00:05:20.572 --rc genhtml_legend=1 00:05:20.572 --rc geninfo_all_blocks=1 00:05:20.572 --rc geninfo_unexecuted_blocks=1 00:05:20.572 00:05:20.572 ' 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.572 --rc genhtml_branch_coverage=1 00:05:20.572 --rc genhtml_function_coverage=1 00:05:20.572 --rc genhtml_legend=1 00:05:20.572 --rc geninfo_all_blocks=1 00:05:20.572 --rc geninfo_unexecuted_blocks=1 00:05:20.572 00:05:20.572 ' 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.572 --rc genhtml_branch_coverage=1 00:05:20.572 --rc genhtml_function_coverage=1 00:05:20.572 --rc genhtml_legend=1 00:05:20.572 --rc geninfo_all_blocks=1 00:05:20.572 --rc geninfo_unexecuted_blocks=1 00:05:20.572 00:05:20.572 ' 00:05:20.572 12:36:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.572 12:36:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58197 00:05:20.572 12:36:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.572 12:36:09 event.event_scheduler 
-- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.572 12:36:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58197 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58197 ']' 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:20.572 12:36:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.830 [2024-11-06 12:36:09.245413] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
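The `waitforlisten 58197` call above blocks until the freshly started scheduler app is up and listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal sketch of that polling pattern; the real helper additionally issues an RPC to confirm the server actually answers on the socket:

```shell
#!/usr/bin/env bash
# Poll until the daemon with pid $1 is alive and its UNIX-domain RPC
# socket exists. Simplified sketch of the waitforlisten pattern.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died
        [ -S "$rpc_addr" ] && return 0           # socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```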
00:05:20.830 [2024-11-06 12:36:09.245841] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58197 ] 00:05:20.830 [2024-11-06 12:36:09.426610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.097 [2024-11-06 12:36:09.563350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.097 [2024-11-06 12:36:09.563530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.097 [2024-11-06 12:36:09.564301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.097 [2024-11-06 12:36:09.564330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:21.668 12:36:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.668 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.668 POWER: Cannot set governor of lcore 0 to userspace 00:05:21.668 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.668 POWER: Cannot set governor of lcore 0 to performance 00:05:21.668 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.668 POWER: Cannot set governor of lcore 0 to userspace 00:05:21.668 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.668 POWER: Cannot set governor of lcore 0 to userspace 00:05:21.668 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:21.668 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:21.668 POWER: Unable to set Power Management Environment for lcore 0 00:05:21.668 [2024-11-06 12:36:10.175002] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:21.668 [2024-11-06 12:36:10.175031] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:21.668 [2024-11-06 12:36:10.175047] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:21.668 [2024-11-06 12:36:10.175073] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:21.668 [2024-11-06 12:36:10.175086] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:21.668 [2024-11-06 12:36:10.175100] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.668 12:36:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.668 12:36:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.927 [2024-11-06 12:36:10.501014] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
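The `POWER:` errors above come from DPDK's power library trying to write each lcore's cpufreq governor under sysfs and failing in this VM, after which the dynamic scheduler proceeds without the dpdk governor. A sketch of the underlying check (standard sysfs path; this is not an SPDK function):

```shell
#!/usr/bin/env bash
# Check whether a core's cpufreq governor is settable, i.e. whether
# the sysfs file DPDK writes to exists and is writable. In this
# log's VM it is not, hence the POWER errors above.
can_set_governor() {
    local cpu=${1:-0}
    local f=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
    [ -w "$f" ]   # false -> the "Cannot set governor of lcore" case
}
```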
00:05:21.927 12:36:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.927 12:36:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:21.927 12:36:10 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.927 12:36:10 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.927 12:36:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.927 ************************************ 00:05:21.927 START TEST scheduler_create_thread 00:05:21.927 ************************************ 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.927 2 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.927 3 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.927 4 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:21.927 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.928 5 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.928 6 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:21.928 7 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.928 8 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.928 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.186 9 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.186 10 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.186 12:36:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.560 12:36:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.560 12:36:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:23.560 12:36:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:23.560 12:36:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.560 12:36:12 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.496 ************************************ 00:05:24.496 END TEST scheduler_create_thread 00:05:24.496 ************************************ 00:05:24.496 12:36:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.496 00:05:24.496 real 0m2.622s 00:05:24.496 user 0m0.018s 00:05:24.496 sys 0m0.004s 00:05:24.496 12:36:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.496 12:36:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.776 12:36:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.777 12:36:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58197 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58197 ']' 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58197 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58197 00:05:24.777 killing process with pid 58197 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58197' 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58197 00:05:24.777 12:36:13 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58197 00:05:25.035 [2024-11-06 12:36:13.615688] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:26.412 ************************************ 00:05:26.412 END TEST event_scheduler 00:05:26.412 ************************************ 00:05:26.412 00:05:26.412 real 0m5.740s 00:05:26.412 user 0m9.967s 00:05:26.412 sys 0m0.467s 00:05:26.412 12:36:14 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.412 12:36:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.412 12:36:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.412 12:36:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.412 12:36:14 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.412 12:36:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.412 12:36:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.412 ************************************ 00:05:26.412 START TEST app_repeat 00:05:26.412 ************************************ 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.412 Process app_repeat pid: 58308 00:05:26.412 spdk_app_start Round 0 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58308 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58308' 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.412 12:36:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58308 /var/tmp/spdk-nbd.sock 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58308 ']' 00:05:26.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.412 12:36:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.412 [2024-11-06 12:36:14.795524] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
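The `killprocess 58197` call earlier in the log tears the scheduler app down: it validates the pid, confirms the process is alive, then SIGTERMs and reaps it. A simplified sketch, omitting the `uname`/`ps -o comm=`/sudo checks the real helper performs:

```shell
#!/usr/bin/env bash
# Simplified killprocess: verify the pid argument, confirm the
# process exists, then SIGTERM and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # nothing to kill
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore its status
}
# As in scheduler.sh, it is installed in a trap so the daemon dies
# on any exit path:
#   trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
```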
00:05:26.412 [2024-11-06 12:36:14.795697] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58308 ] 00:05:26.412 [2024-11-06 12:36:14.970269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.686 [2024-11-06 12:36:15.111105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.686 [2024-11-06 12:36:15.111108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.252 12:36:15 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.252 12:36:15 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:27.252 12:36:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.817 Malloc0 00:05:27.817 12:36:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.075 Malloc1 00:05:28.075 12:36:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.075 12:36:16 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.075 12:36:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.641 /dev/nbd0 00:05:28.641 12:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.641 12:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.641 1+0 records in 00:05:28.641 1+0 
records out 00:05:28.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619497 s, 6.6 MB/s 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:28.641 12:36:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.642 12:36:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:28.642 12:36:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:28.642 12:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.642 12:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.642 12:36:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.900 /dev/nbd1 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.900 1+0 records in 00:05:28.900 1+0 records out 00:05:28.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408019 s, 10.0 MB/s 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:28.900 12:36:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.900 12:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.158 { 00:05:29.158 "nbd_device": "/dev/nbd0", 00:05:29.158 "bdev_name": "Malloc0" 00:05:29.158 }, 00:05:29.158 { 00:05:29.158 "nbd_device": "/dev/nbd1", 00:05:29.158 "bdev_name": "Malloc1" 00:05:29.158 } 00:05:29.158 ]' 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.158 { 00:05:29.158 "nbd_device": "/dev/nbd0", 00:05:29.158 "bdev_name": "Malloc0" 00:05:29.158 }, 00:05:29.158 { 00:05:29.158 "nbd_device": "/dev/nbd1", 00:05:29.158 "bdev_name": "Malloc1" 00:05:29.158 } 00:05:29.158 ]' 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.158 /dev/nbd1' 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.158 /dev/nbd1' 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.158 256+0 records in 00:05:29.158 256+0 records out 00:05:29.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678251 s, 155 MB/s 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.158 12:36:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.416 256+0 records in 00:05:29.416 256+0 records out 00:05:29.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332273 s, 31.6 MB/s 00:05:29.416 12:36:17 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.416 256+0 records in 00:05:29.416 256+0 records out 00:05:29.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0389168 s, 26.9 MB/s 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.416 12:36:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.417 12:36:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.675 12:36:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.933 12:36:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.192 12:36:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.192 12:36:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.757 12:36:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.131 [2024-11-06 12:36:20.381301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.131 [2024-11-06 12:36:20.507902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.131 [2024-11-06 12:36:20.507918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.131 
[2024-11-06 12:36:20.697044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.131 [2024-11-06 12:36:20.697167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.031 spdk_app_start Round 1 00:05:34.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.031 12:36:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.031 12:36:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.031 12:36:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58308 /var/tmp/spdk-nbd.sock 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58308 ']' 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.031 12:36:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:34.031 12:36:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.648 Malloc0 00:05:34.648 12:36:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.906 Malloc1 00:05:34.906 12:36:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.906 12:36:23 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.906 12:36:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.165 /dev/nbd0 00:05:35.165 12:36:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.165 12:36:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.165 1+0 records in 00:05:35.165 1+0 records out 00:05:35.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338486 s, 12.1 MB/s 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.165 
12:36:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:35.165 12:36:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:35.165 12:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.165 12:36:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.165 12:36:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.423 /dev/nbd1 00:05:35.423 12:36:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.423 12:36:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.423 12:36:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:35.423 12:36:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:35.423 12:36:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:35.423 12:36:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:35.423 12:36:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:35.680 12:36:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:35.680 12:36:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:35.680 12:36:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:35.681 12:36:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.681 1+0 records in 00:05:35.681 1+0 records out 00:05:35.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346261 s, 11.8 MB/s 00:05:35.681 12:36:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.681 12:36:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:35.681 12:36:24 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.681 12:36:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:35.681 12:36:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:35.681 12:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.681 12:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.681 12:36:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.681 12:36:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.681 12:36:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.938 12:36:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.938 { 00:05:35.939 "nbd_device": "/dev/nbd0", 00:05:35.939 "bdev_name": "Malloc0" 00:05:35.939 }, 00:05:35.939 { 00:05:35.939 "nbd_device": "/dev/nbd1", 00:05:35.939 "bdev_name": "Malloc1" 00:05:35.939 } 00:05:35.939 ]' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.939 { 00:05:35.939 "nbd_device": "/dev/nbd0", 00:05:35.939 "bdev_name": "Malloc0" 00:05:35.939 }, 00:05:35.939 { 00:05:35.939 "nbd_device": "/dev/nbd1", 00:05:35.939 "bdev_name": "Malloc1" 00:05:35.939 } 00:05:35.939 ]' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.939 /dev/nbd1' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.939 /dev/nbd1' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.939 
12:36:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.939 256+0 records in 00:05:35.939 256+0 records out 00:05:35.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106911 s, 98.1 MB/s 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.939 256+0 records in 00:05:35.939 256+0 records out 00:05:35.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301488 s, 34.8 MB/s 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.939 256+0 records in 00:05:35.939 256+0 records out 00:05:35.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287569 s, 36.5 MB/s 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.939 12:36:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.197 12:36:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.197 12:36:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.455 12:36:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.455 12:36:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.712 12:36:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.030 12:36:25 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.030 12:36:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.030 12:36:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.289 12:36:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.663 [2024-11-06 12:36:26.929272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.663 [2024-11-06 12:36:27.054476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.663 [2024-11-06 12:36:27.054477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.663 [2024-11-06 12:36:27.243339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.663 [2024-11-06 12:36:27.243419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:40.618 12:36:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.618 spdk_app_start Round 2 00:05:40.618 12:36:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.618 12:36:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58308 /var/tmp/spdk-nbd.sock 00:05:40.618 12:36:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58308 ']' 00:05:40.618 12:36:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.618 12:36:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.618 12:36:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.618 12:36:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.618 12:36:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.618 12:36:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.618 12:36:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:40.618 12:36:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.876 Malloc0 00:05:40.876 12:36:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.440 Malloc1 00:05:41.440 12:36:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.440 
12:36:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.440 12:36:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.697 /dev/nbd0 00:05:41.697 12:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.697 12:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.697 12:36:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:41.697 12:36:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:41.697 12:36:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:41.698 12:36:30 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.698 1+0 records in 00:05:41.698 1+0 records out 00:05:41.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298295 s, 13.7 MB/s 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:41.698 12:36:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:41.698 12:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.698 12:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.698 12:36:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.955 /dev/nbd1 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:41.955 12:36:30 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.955 1+0 records in 00:05:41.955 1+0 records out 00:05:41.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371713 s, 11.0 MB/s 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:41.955 12:36:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.955 12:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.213 12:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.213 { 00:05:42.213 "nbd_device": "/dev/nbd0", 00:05:42.213 "bdev_name": "Malloc0" 00:05:42.213 }, 00:05:42.213 { 00:05:42.213 "nbd_device": "/dev/nbd1", 00:05:42.213 "bdev_name": 
"Malloc1" 00:05:42.213 } 00:05:42.213 ]' 00:05:42.470 12:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.471 { 00:05:42.471 "nbd_device": "/dev/nbd0", 00:05:42.471 "bdev_name": "Malloc0" 00:05:42.471 }, 00:05:42.471 { 00:05:42.471 "nbd_device": "/dev/nbd1", 00:05:42.471 "bdev_name": "Malloc1" 00:05:42.471 } 00:05:42.471 ]' 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.471 /dev/nbd1' 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.471 /dev/nbd1' 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.471 256+0 records in 00:05:42.471 256+0 records out 00:05:42.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00797951 s, 131 MB/s 
00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.471 256+0 records in 00:05:42.471 256+0 records out 00:05:42.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305625 s, 34.3 MB/s 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.471 12:36:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.471 256+0 records in 00:05:42.471 256+0 records out 00:05:42.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0384112 s, 27.3 MB/s 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.471 12:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.728 12:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.985 12:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.243 12:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.500 12:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.500 12:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.500 12:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.500 12:36:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.500 12:36:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.065 12:36:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.997 [2024-11-06 12:36:33.530405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.255 [2024-11-06 12:36:33.657270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.255 [2024-11-06 12:36:33.657265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.255 [2024-11-06 12:36:33.846700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.255 [2024-11-06 12:36:33.846822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.153 12:36:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58308 /var/tmp/spdk-nbd.sock 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58308 ']' 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:47.154 12:36:35 event.app_repeat -- event/event.sh@39 -- # killprocess 58308 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58308 ']' 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58308 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.154 12:36:35 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58308 00:05:47.414 12:36:35 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.414 12:36:35 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.414 killing process with pid 58308 00:05:47.414 12:36:35 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58308' 00:05:47.414 12:36:35 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58308 00:05:47.414 12:36:35 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58308 00:05:48.349 spdk_app_start is called in Round 0. 00:05:48.349 Shutdown signal received, stop current app iteration 00:05:48.349 Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 reinitialization... 00:05:48.349 spdk_app_start is called in Round 1. 00:05:48.349 Shutdown signal received, stop current app iteration 00:05:48.349 Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 reinitialization... 00:05:48.349 spdk_app_start is called in Round 2. 
00:05:48.349 Shutdown signal received, stop current app iteration 00:05:48.349 Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 reinitialization... 00:05:48.349 spdk_app_start is called in Round 3. 00:05:48.349 Shutdown signal received, stop current app iteration 00:05:48.349 12:36:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.349 12:36:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.349 00:05:48.349 real 0m22.036s 00:05:48.349 user 0m49.028s 00:05:48.349 sys 0m3.087s 00:05:48.349 12:36:36 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.349 12:36:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.349 ************************************ 00:05:48.349 END TEST app_repeat 00:05:48.349 ************************************ 00:05:48.349 12:36:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.349 12:36:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:48.349 12:36:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.349 12:36:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.350 12:36:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.350 ************************************ 00:05:48.350 START TEST cpu_locks 00:05:48.350 ************************************ 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:48.350 * Looking for test storage... 
00:05:48.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.350 12:36:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.350 --rc genhtml_branch_coverage=1 00:05:48.350 --rc genhtml_function_coverage=1 00:05:48.350 --rc genhtml_legend=1 00:05:48.350 --rc geninfo_all_blocks=1 00:05:48.350 --rc geninfo_unexecuted_blocks=1 00:05:48.350 00:05:48.350 ' 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.350 --rc genhtml_branch_coverage=1 00:05:48.350 --rc genhtml_function_coverage=1 00:05:48.350 --rc genhtml_legend=1 00:05:48.350 --rc geninfo_all_blocks=1 00:05:48.350 --rc geninfo_unexecuted_blocks=1 
00:05:48.350 00:05:48.350 ' 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.350 --rc genhtml_branch_coverage=1 00:05:48.350 --rc genhtml_function_coverage=1 00:05:48.350 --rc genhtml_legend=1 00:05:48.350 --rc geninfo_all_blocks=1 00:05:48.350 --rc geninfo_unexecuted_blocks=1 00:05:48.350 00:05:48.350 ' 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.350 --rc genhtml_branch_coverage=1 00:05:48.350 --rc genhtml_function_coverage=1 00:05:48.350 --rc genhtml_legend=1 00:05:48.350 --rc geninfo_all_blocks=1 00:05:48.350 --rc geninfo_unexecuted_blocks=1 00:05:48.350 00:05:48.350 ' 00:05:48.350 12:36:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.350 12:36:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.350 12:36:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.350 12:36:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.350 12:36:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.350 ************************************ 00:05:48.350 START TEST default_locks 00:05:48.350 ************************************ 00:05:48.350 12:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:48.350 12:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58783 00:05:48.350 12:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58783 00:05:48.350 12:36:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@833 -- # '[' -z 58783 ']' 00:05:48.350 12:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.608 12:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.608 12:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.608 12:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.608 12:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.608 12:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.608 [2024-11-06 12:36:37.117075] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:05:48.608 [2024-11-06 12:36:37.117255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58783 ] 00:05:48.866 [2024-11-06 12:36:37.292037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.866 [2024-11-06 12:36:37.455630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.802 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.802 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:49.802 12:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58783 00:05:49.802 12:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58783 00:05:49.802 12:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58783 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58783 ']' 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58783 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58783 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:50.371 killing process with pid 58783 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58783' 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58783 00:05:50.371 12:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58783 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58783 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58783 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58783 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58783 ']' 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.898 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58783) - No such process 00:05:52.898 ERROR: process (pid: 58783) is no longer running 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.898 00:05:52.898 real 0m3.975s 00:05:52.898 user 0m3.958s 00:05:52.898 sys 0m0.749s 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.898 12:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.898 ************************************ 00:05:52.898 END TEST default_locks 00:05:52.898 ************************************ 00:05:52.898 12:36:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.898 12:36:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:05:52.898 12:36:41 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.898 12:36:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.898 ************************************ 00:05:52.898 START TEST default_locks_via_rpc 00:05:52.898 ************************************ 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58860 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58860 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58860 ']' 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.898 12:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.898 [2024-11-06 12:36:41.141529] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:05:52.898 [2024-11-06 12:36:41.141731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58860 ] 00:05:52.898 [2024-11-06 12:36:41.311875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.898 [2024-11-06 12:36:41.436054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.831 12:36:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58860 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58860 00:05:53.831 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58860 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58860 ']' 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58860 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58860 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:54.088 killing process with pid 58860 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58860' 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58860 00:05:54.088 12:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58860 00:05:56.618 00:05:56.618 real 0m3.882s 00:05:56.618 user 0m4.017s 00:05:56.618 sys 0m0.677s 00:05:56.618 12:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.618 12:36:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.618 ************************************ 00:05:56.618 END TEST default_locks_via_rpc 00:05:56.618 ************************************ 00:05:56.618 12:36:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.618 12:36:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.618 12:36:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.618 12:36:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.618 ************************************ 00:05:56.618 START TEST non_locking_app_on_locked_coremask 00:05:56.618 ************************************ 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58934 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58934 /var/tmp/spdk.sock 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58934 ']' 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.618 12:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.618 [2024-11-06 12:36:45.106952] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:05:56.618 [2024-11-06 12:36:45.107155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58934 ] 00:05:56.890 [2024-11-06 12:36:45.297400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.890 [2024-11-06 12:36:45.426608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58950 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58950 /var/tmp/spdk2.sock 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58950 ']' 00:05:57.825 12:36:46 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.825 12:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.083 [2024-11-06 12:36:46.559427] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:05:58.083 [2024-11-06 12:36:46.559596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58950 ] 00:05:58.341 [2024-11-06 12:36:46.753919] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.341 [2024-11-06 12:36:46.754008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.599 [2024-11-06 12:36:47.021462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.133 12:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.133 12:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:01.133 12:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58934 00:06:01.133 12:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58934 00:06:01.133 12:36:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58934 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58934 ']' 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58934 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58934 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:01.391 killing process with pid 58934 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58934' 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58934 00:06:01.391 12:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58934 00:06:06.660 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58950 00:06:06.660 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58950 ']' 00:06:06.660 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58950 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58950 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:06.661 killing process with pid 58950 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58950' 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58950 00:06:06.661 12:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58950 00:06:08.564 00:06:08.564 real 0m11.871s 00:06:08.564 user 0m12.479s 00:06:08.564 sys 0m1.454s 00:06:08.564 12:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:08.564 ************************************ 00:06:08.564 END TEST non_locking_app_on_locked_coremask 00:06:08.564 ************************************ 00:06:08.564 12:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.564 12:36:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:08.564 12:36:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.564 12:36:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.564 12:36:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.564 ************************************ 00:06:08.564 START TEST locking_app_on_unlocked_coremask 00:06:08.564 ************************************ 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59107 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59107 /var/tmp/spdk.sock 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59107 ']' 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.564 12:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.564 [2024-11-06 12:36:57.032940] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:08.564 [2024-11-06 12:36:57.033153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59107 ] 00:06:08.822 [2024-11-06 12:36:57.220346] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.822 [2024-11-06 12:36:57.220417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.822 [2024-11-06 12:36:57.349868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59123 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59123 /var/tmp/spdk2.sock 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59123 ']' 
00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.758 12:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.758 [2024-11-06 12:36:58.318544] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:09.758 [2024-11-06 12:36:58.318742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:06:10.016 [2024-11-06 12:36:58.512791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.274 [2024-11-06 12:36:58.812330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.808 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.808 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:12.808 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59123 00:06:12.808 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59123 00:06:12.808 12:37:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59107 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59107 ']' 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59107 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59107 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59107' 00:06:13.374 killing process with pid 59107 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59107 00:06:13.374 12:37:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59107 00:06:18.643 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59123 00:06:18.643 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59123 ']' 00:06:18.643 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59123 00:06:18.643 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@957 -- # uname 00:06:18.643 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.643 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59123 00:06:18.644 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.644 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.644 killing process with pid 59123 00:06:18.644 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59123' 00:06:18.644 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59123 00:06:18.644 12:37:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59123 00:06:20.547 00:06:20.547 real 0m11.803s 00:06:20.547 user 0m12.269s 00:06:20.547 sys 0m1.583s 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.547 ************************************ 00:06:20.547 END TEST locking_app_on_unlocked_coremask 00:06:20.547 ************************************ 00:06:20.547 12:37:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:20.547 12:37:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.547 12:37:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.547 12:37:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.547 ************************************ 00:06:20.547 START TEST 
locking_app_on_locked_coremask 00:06:20.547 ************************************ 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59271 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59271 /var/tmp/spdk.sock 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59271 ']' 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.547 12:37:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.547 [2024-11-06 12:37:08.874625] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:20.547 [2024-11-06 12:37:08.874808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:06:20.547 [2024-11-06 12:37:09.055164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.547 [2024-11-06 12:37:09.185673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59291 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59291 /var/tmp/spdk2.sock 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59291 /var/tmp/spdk2.sock 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59291 /var/tmp/spdk2.sock 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59291 ']' 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.486 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.745 [2024-11-06 12:37:10.178139] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:21.745 [2024-11-06 12:37:10.178331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:06:21.745 [2024-11-06 12:37:10.379493] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59271 has claimed it. 00:06:21.745 [2024-11-06 12:37:10.379588] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:22.312 ERROR: process (pid: 59291) is no longer running 00:06:22.312 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59291) - No such process 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59271 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.312 12:37:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59271 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59271 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59271 ']' 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59271 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59271 00:06:22.878 
12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.878 killing process with pid 59271 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59271' 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59271 00:06:22.878 12:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59271 00:06:25.426 00:06:25.426 real 0m4.801s 00:06:25.426 user 0m5.099s 00:06:25.426 sys 0m0.872s 00:06:25.426 12:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.426 12:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 ************************************ 00:06:25.426 END TEST locking_app_on_locked_coremask 00:06:25.426 ************************************ 00:06:25.426 12:37:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.426 12:37:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.426 12:37:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.426 12:37:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 ************************************ 00:06:25.426 START TEST locking_overlapped_coremask 00:06:25.426 ************************************ 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59362 00:06:25.426 12:37:13 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59362 ']' 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.426 12:37:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.426 [2024-11-06 12:37:13.713615] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:25.426 [2024-11-06 12:37:13.713785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59362 ] 00:06:25.426 [2024-11-06 12:37:13.891107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.426 [2024-11-06 12:37:14.028469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.426 [2024-11-06 12:37:14.028530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.426 [2024-11-06 12:37:14.028535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.389 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59380 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59380 /var/tmp/spdk2.sock 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59380 /var/tmp/spdk2.sock 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59380 ']' 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.390 12:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.390 [2024-11-06 12:37:14.998482] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:26.390 [2024-11-06 12:37:14.999106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59380 ] 00:06:26.648 [2024-11-06 12:37:15.193482] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59362 has claimed it. 00:06:26.648 [2024-11-06 12:37:15.193574] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:27.216 ERROR: process (pid: 59380) is no longer running 00:06:27.216 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59380) - No such process 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59362 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59362 ']' 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59362 00:06:27.216 12:37:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59362 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.216 killing process with pid 59362 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59362' 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59362 00:06:27.216 12:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59362 00:06:29.764 00:06:29.764 real 0m4.345s 00:06:29.764 user 0m11.862s 00:06:29.764 sys 0m0.655s 00:06:29.764 ************************************ 00:06:29.764 END TEST locking_overlapped_coremask 00:06:29.764 ************************************ 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.764 12:37:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.764 12:37:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.764 12:37:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.764 12:37:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.764 ************************************ 00:06:29.764 START TEST 
locking_overlapped_coremask_via_rpc 00:06:29.764 ************************************ 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59444 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59444 /var/tmp/spdk.sock 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59444 ']' 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.764 12:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.764 [2024-11-06 12:37:18.139908] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:29.764 [2024-11-06 12:37:18.140211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59444 ] 00:06:29.764 [2024-11-06 12:37:18.339887] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.764 [2024-11-06 12:37:18.339993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.023 [2024-11-06 12:37:18.477910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.023 [2024-11-06 12:37:18.477991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.023 [2024-11-06 12:37:18.477994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59462 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59462 /var/tmp/spdk2.sock 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59462 ']' 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.959 12:37:19 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.959 12:37:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.959 [2024-11-06 12:37:19.452821] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:30.960 [2024-11-06 12:37:19.453001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59462 ] 00:06:31.218 [2024-11-06 12:37:19.657206] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.218 [2024-11-06 12:37:19.657305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.476 [2024-11-06 12:37:19.931301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.476 [2024-11-06 12:37:19.934402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.476 [2024-11-06 12:37:19.934419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.029 12:37:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.029 [2024-11-06 12:37:22.237473] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59444 has claimed it. 00:06:34.029 request: 00:06:34.029 { 00:06:34.029 "method": "framework_enable_cpumask_locks", 00:06:34.029 "req_id": 1 00:06:34.029 } 00:06:34.029 Got JSON-RPC error response 00:06:34.029 response: 00:06:34.029 { 00:06:34.029 "code": -32603, 00:06:34.029 "message": "Failed to claim CPU core: 2" 00:06:34.029 } 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59444 /var/tmp/spdk.sock 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59444 ']' 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59462 /var/tmp/spdk2.sock 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59462 ']' 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.029 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.288 00:06:34.288 real 0m4.795s 00:06:34.288 user 0m1.813s 00:06:34.288 sys 0m0.236s 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.288 ************************************ 00:06:34.288 END TEST locking_overlapped_coremask_via_rpc 00:06:34.288 ************************************ 00:06:34.288 12:37:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 12:37:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.288 12:37:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59444 ]] 00:06:34.288 12:37:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59444 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59444 ']' 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59444 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59444 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.288 killing process with pid 59444 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59444' 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59444 00:06:34.288 12:37:22 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59444 00:06:36.852 12:37:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59462 ]] 00:06:36.852 12:37:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59462 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59462 ']' 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59462 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59462 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:36.852 killing process with pid 59462 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59462' 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59462 00:06:36.852 12:37:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59462 00:06:38.754 12:37:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.027 12:37:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.027 12:37:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59444 ]] 00:06:39.027 12:37:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59444 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59444 ']' 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59444 00:06:39.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59444) - No such process 00:06:39.027 Process with pid 59444 is not found 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59444 is not found' 00:06:39.027 12:37:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59462 ]] 00:06:39.027 12:37:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59462 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59462 ']' 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59462 00:06:39.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59462) - No such process 00:06:39.027 Process with pid 59462 is not found 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59462 is not found' 00:06:39.027 12:37:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.027 00:06:39.027 real 0m50.594s 00:06:39.027 user 1m27.339s 00:06:39.027 sys 0m7.518s 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.027 12:37:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.027 
************************************ 00:06:39.027 END TEST cpu_locks 00:06:39.027 ************************************ 00:06:39.027 00:06:39.027 real 1m23.616s 00:06:39.027 user 2m33.647s 00:06:39.027 sys 0m11.631s 00:06:39.027 12:37:27 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.027 ************************************ 00:06:39.027 END TEST event 00:06:39.027 ************************************ 00:06:39.027 12:37:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.027 12:37:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.027 12:37:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.027 12:37:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.027 12:37:27 -- common/autotest_common.sh@10 -- # set +x 00:06:39.027 ************************************ 00:06:39.027 START TEST thread 00:06:39.027 ************************************ 00:06:39.028 12:37:27 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.028 * Looking for test storage... 
00:06:39.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:39.028 12:37:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.028 12:37:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.028 12:37:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.287 12:37:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.287 12:37:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.287 12:37:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.287 12:37:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.287 12:37:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.287 12:37:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.287 12:37:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.287 12:37:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.287 12:37:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.287 12:37:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.287 12:37:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.287 12:37:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:39.287 12:37:27 thread -- scripts/common.sh@345 -- # : 1 00:06:39.287 12:37:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.287 12:37:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.287 12:37:27 thread -- scripts/common.sh@365 -- # decimal 1 00:06:39.287 12:37:27 thread -- scripts/common.sh@353 -- # local d=1 00:06:39.287 12:37:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.287 12:37:27 thread -- scripts/common.sh@355 -- # echo 1 00:06:39.287 12:37:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.287 12:37:27 thread -- scripts/common.sh@366 -- # decimal 2 00:06:39.287 12:37:27 thread -- scripts/common.sh@353 -- # local d=2 00:06:39.287 12:37:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.287 12:37:27 thread -- scripts/common.sh@355 -- # echo 2 00:06:39.287 12:37:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.287 12:37:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.287 12:37:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.287 12:37:27 thread -- scripts/common.sh@368 -- # return 0 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.287 --rc genhtml_branch_coverage=1 00:06:39.287 --rc genhtml_function_coverage=1 00:06:39.287 --rc genhtml_legend=1 00:06:39.287 --rc geninfo_all_blocks=1 00:06:39.287 --rc geninfo_unexecuted_blocks=1 00:06:39.287 00:06:39.287 ' 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.287 --rc genhtml_branch_coverage=1 00:06:39.287 --rc genhtml_function_coverage=1 00:06:39.287 --rc genhtml_legend=1 00:06:39.287 --rc geninfo_all_blocks=1 00:06:39.287 --rc geninfo_unexecuted_blocks=1 00:06:39.287 00:06:39.287 ' 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.287 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.287 --rc genhtml_branch_coverage=1 00:06:39.287 --rc genhtml_function_coverage=1 00:06:39.287 --rc genhtml_legend=1 00:06:39.287 --rc geninfo_all_blocks=1 00:06:39.287 --rc geninfo_unexecuted_blocks=1 00:06:39.287 00:06:39.287 ' 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.287 --rc genhtml_branch_coverage=1 00:06:39.287 --rc genhtml_function_coverage=1 00:06:39.287 --rc genhtml_legend=1 00:06:39.287 --rc geninfo_all_blocks=1 00:06:39.287 --rc geninfo_unexecuted_blocks=1 00:06:39.287 00:06:39.287 ' 00:06:39.287 12:37:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.287 12:37:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.287 ************************************ 00:06:39.287 START TEST thread_poller_perf 00:06:39.287 ************************************ 00:06:39.287 12:37:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.287 [2024-11-06 12:37:27.790074] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:39.287 [2024-11-06 12:37:27.790238] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:06:39.546 [2024-11-06 12:37:27.968560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.546 [2024-11-06 12:37:28.128990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.546 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:40.921 [2024-11-06T12:37:29.578Z] ====================================== 00:06:40.921 [2024-11-06T12:37:29.578Z] busy:2216679254 (cyc) 00:06:40.921 [2024-11-06T12:37:29.579Z] total_run_count: 288000 00:06:40.922 [2024-11-06T12:37:29.579Z] tsc_hz: 2200000000 (cyc) 00:06:40.922 [2024-11-06T12:37:29.579Z] ====================================== 00:06:40.922 [2024-11-06T12:37:29.579Z] poller_cost: 7696 (cyc), 3498 (nsec) 00:06:40.922 00:06:40.922 real 0m1.638s 00:06:40.922 user 0m1.413s 00:06:40.922 sys 0m0.116s 00:06:40.922 12:37:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.922 12:37:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.922 ************************************ 00:06:40.922 END TEST thread_poller_perf 00:06:40.922 ************************************ 00:06:40.922 12:37:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.922 12:37:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:40.922 12:37:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.922 12:37:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.922 ************************************ 00:06:40.922 START TEST thread_poller_perf 00:06:40.922 
************************************ 00:06:40.922 12:37:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.922 [2024-11-06 12:37:29.495950] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:40.922 [2024-11-06 12:37:29.496165] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:06:41.200 [2024-11-06 12:37:29.691010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.200 [2024-11-06 12:37:29.849504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.200 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:42.576 [2024-11-06T12:37:31.233Z] ====================================== 00:06:42.576 [2024-11-06T12:37:31.233Z] busy:2205240466 (cyc) 00:06:42.576 [2024-11-06T12:37:31.233Z] total_run_count: 3553000 00:06:42.576 [2024-11-06T12:37:31.233Z] tsc_hz: 2200000000 (cyc) 00:06:42.576 [2024-11-06T12:37:31.233Z] ====================================== 00:06:42.576 [2024-11-06T12:37:31.233Z] poller_cost: 620 (cyc), 281 (nsec) 00:06:42.576 00:06:42.576 real 0m1.644s 00:06:42.576 user 0m1.429s 00:06:42.576 sys 0m0.104s 00:06:42.576 12:37:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.576 12:37:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.576 ************************************ 00:06:42.576 END TEST thread_poller_perf 00:06:42.576 ************************************ 00:06:42.576 12:37:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.576 00:06:42.576 real 0m3.599s 00:06:42.576 user 0m2.995s 00:06:42.576 sys 0m0.386s 00:06:42.576 12:37:31 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.576 12:37:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.576 ************************************ 00:06:42.576 END TEST thread 00:06:42.576 ************************************ 00:06:42.576 12:37:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:42.576 12:37:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.576 12:37:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:42.576 12:37:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:42.576 12:37:31 -- common/autotest_common.sh@10 -- # set +x 00:06:42.576 ************************************ 00:06:42.576 START TEST app_cmdline 00:06:42.576 ************************************ 00:06:42.576 12:37:31 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.835 * Looking for test storage... 00:06:42.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.835 12:37:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:42.835 12:37:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:42.835 12:37:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.836 12:37:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.836 --rc genhtml_branch_coverage=1 00:06:42.836 --rc genhtml_function_coverage=1 00:06:42.836 --rc genhtml_legend=1 00:06:42.836 --rc geninfo_all_blocks=1 00:06:42.836 --rc geninfo_unexecuted_blocks=1 00:06:42.836 00:06:42.836 ' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.836 --rc genhtml_branch_coverage=1 00:06:42.836 --rc genhtml_function_coverage=1 00:06:42.836 --rc genhtml_legend=1 00:06:42.836 --rc geninfo_all_blocks=1 00:06:42.836 --rc geninfo_unexecuted_blocks=1 00:06:42.836 00:06:42.836 ' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.836 --rc genhtml_branch_coverage=1 00:06:42.836 --rc genhtml_function_coverage=1 00:06:42.836 --rc genhtml_legend=1 00:06:42.836 --rc geninfo_all_blocks=1 00:06:42.836 --rc geninfo_unexecuted_blocks=1 00:06:42.836 00:06:42.836 ' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.836 --rc genhtml_branch_coverage=1 00:06:42.836 --rc genhtml_function_coverage=1 00:06:42.836 --rc genhtml_legend=1 00:06:42.836 --rc geninfo_all_blocks=1 00:06:42.836 --rc geninfo_unexecuted_blocks=1 00:06:42.836 00:06:42.836 ' 00:06:42.836 12:37:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 
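The xtrace above walks through the cmp_versions helper in scripts/common.sh checking `lt 1.15 2` (is the installed lcov older than 2?). The component-wise comparison it performs can be sketched as follows — a simplified reimplementation for illustration, not the actual script:

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Compare dotted version strings component by component, mirroring the
    IFS=.-: splitting and per-component loop traced above (simplified)."""
    a = [int(x) for x in re.split(r"[.\-:]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.\-:]", v2) if x.isdigit()]
    # Missing trailing components compare as 0 (e.g. "2" behaves like "2.0").
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b

print(cmp_versions("1.15", "<", "2"))  # True: lcov 1.15 predates 2.x
```

This is why the trace ends in `return 0` and the lcov_rc_opt fallback flags get exported.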
00:06:42.836 12:37:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59788 00:06:42.836 12:37:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59788 00:06:42.836 12:37:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59788 ']' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.836 12:37:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.095 [2024-11-06 12:37:31.534280] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:43.095 [2024-11-06 12:37:31.534492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:06:43.095 [2024-11-06 12:37:31.728676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.353 [2024-11-06 12:37:31.887474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.289 12:37:32 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.289 12:37:32 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:44.289 12:37:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:44.547 { 00:06:44.547 "version": "SPDK v25.01-pre git sha1 88726e83b", 00:06:44.547 "fields": { 00:06:44.547 "major": 25, 00:06:44.547 "minor": 1, 00:06:44.547 "patch": 0, 00:06:44.547 "suffix": "-pre", 00:06:44.547 "commit": "88726e83b" 00:06:44.547 } 00:06:44.547 } 00:06:44.547 12:37:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:44.547 12:37:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:44.547 12:37:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:44.547 12:37:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:44.547 12:37:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:44.547 12:37:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:44.547 12:37:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.547 12:37:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.548 12:37:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.548 12:37:33 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:44.548 12:37:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:44.548 12:37:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:44.548 12:37:33 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.115 request: 00:06:45.116 { 00:06:45.116 "method": "env_dpdk_get_mem_stats", 00:06:45.116 "req_id": 1 00:06:45.116 } 00:06:45.116 Got JSON-RPC error response 00:06:45.116 response: 00:06:45.116 { 00:06:45.116 "code": -32601, 00:06:45.116 "message": "Method not found" 00:06:45.116 } 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
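The failure above is the expected one: cmdline.sh started spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so calling any other method (here env_dpdk_get_mem_stats) yields the standard JSON-RPC "Method not found" error, code -32601. The allow-list behaviour can be sketched with an illustrative dispatcher — this is not SPDK's actual implementation, just the shape of the contract the test asserts:

```python
# Illustrative allow-list dispatcher matching the request/response pair
# shown in the log above (method names and error code from the log).
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request: dict) -> dict:
    """Reject methods outside the allow-list with JSON-RPC error -32601."""
    if request["method"] not in ALLOWED:
        return {"req_id": request.get("req_id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"req_id": request.get("req_id"), "result": {}}

resp = dispatch({"method": "env_dpdk_get_mem_stats", "req_id": 1})
print(resp["error"])  # {'code': -32601, 'message': 'Method not found'}
```

The NOT wrapper in the trace then treats this nonzero rpc.py exit as a pass (es=1).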
00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.116 12:37:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59788 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59788 ']' 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59788 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59788 00:06:45.116 killing process with pid 59788 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59788' 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@971 -- # kill 59788 00:06:45.116 12:37:33 app_cmdline -- common/autotest_common.sh@976 -- # wait 59788 00:06:47.648 00:06:47.648 real 0m4.636s 00:06:47.648 user 0m5.074s 00:06:47.648 sys 0m0.736s 00:06:47.648 12:37:35 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.648 12:37:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.648 ************************************ 00:06:47.648 END TEST app_cmdline 00:06:47.648 ************************************ 00:06:47.648 12:37:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.648 12:37:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.648 12:37:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.648 12:37:35 -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.648 ************************************ 00:06:47.648 START TEST version 00:06:47.648 ************************************ 00:06:47.648 12:37:35 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:47.648 * Looking for test storage... 00:06:47.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.648 12:37:35 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.648 12:37:35 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:47.648 12:37:35 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.648 12:37:36 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:47.648 12:37:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.648 12:37:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.649 12:37:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.649 12:37:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.649 12:37:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.649 12:37:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.649 12:37:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.649 12:37:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.649 12:37:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.649 12:37:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.649 12:37:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.649 12:37:36 version -- scripts/common.sh@344 -- # case "$op" in 00:06:47.649 12:37:36 version -- scripts/common.sh@345 -- # : 1 00:06:47.649 12:37:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.649 12:37:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.649 12:37:36 version -- scripts/common.sh@365 -- # decimal 1 00:06:47.649 12:37:36 version -- scripts/common.sh@353 -- # local d=1 00:06:47.649 12:37:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.649 12:37:36 version -- scripts/common.sh@355 -- # echo 1 00:06:47.649 12:37:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.649 12:37:36 version -- scripts/common.sh@366 -- # decimal 2 00:06:47.649 12:37:36 version -- scripts/common.sh@353 -- # local d=2 00:06:47.649 12:37:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.649 12:37:36 version -- scripts/common.sh@355 -- # echo 2 00:06:47.649 12:37:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.649 12:37:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.649 12:37:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.649 12:37:36 version -- scripts/common.sh@368 -- # return 0 00:06:47.649 12:37:36 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.649 12:37:36 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:47.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.649 --rc genhtml_branch_coverage=1 00:06:47.649 --rc genhtml_function_coverage=1 00:06:47.649 --rc genhtml_legend=1 00:06:47.649 --rc geninfo_all_blocks=1 00:06:47.649 --rc geninfo_unexecuted_blocks=1 00:06:47.649 00:06:47.649 ' 00:06:47.649 12:37:36 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:47.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.649 --rc genhtml_branch_coverage=1 00:06:47.649 --rc genhtml_function_coverage=1 00:06:47.649 --rc genhtml_legend=1 00:06:47.649 --rc geninfo_all_blocks=1 00:06:47.649 --rc geninfo_unexecuted_blocks=1 00:06:47.649 00:06:47.649 ' 00:06:47.649 12:37:36 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:47.649 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.649 --rc genhtml_branch_coverage=1 00:06:47.649 --rc genhtml_function_coverage=1 00:06:47.649 --rc genhtml_legend=1 00:06:47.649 --rc geninfo_all_blocks=1 00:06:47.649 --rc geninfo_unexecuted_blocks=1 00:06:47.649 00:06:47.649 ' 00:06:47.649 12:37:36 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:47.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.649 --rc genhtml_branch_coverage=1 00:06:47.649 --rc genhtml_function_coverage=1 00:06:47.649 --rc genhtml_legend=1 00:06:47.649 --rc geninfo_all_blocks=1 00:06:47.649 --rc geninfo_unexecuted_blocks=1 00:06:47.649 00:06:47.649 ' 00:06:47.649 12:37:36 version -- app/version.sh@17 -- # get_header_version major 00:06:47.649 12:37:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # cut -f2 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.649 12:37:36 version -- app/version.sh@17 -- # major=25 00:06:47.649 12:37:36 version -- app/version.sh@18 -- # get_header_version minor 00:06:47.649 12:37:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # cut -f2 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.649 12:37:36 version -- app/version.sh@18 -- # minor=1 00:06:47.649 12:37:36 version -- app/version.sh@19 -- # get_header_version patch 00:06:47.649 12:37:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # cut -f2 00:06:47.649 12:37:36 version -- app/version.sh@19 -- # patch=0 00:06:47.649 
12:37:36 version -- app/version.sh@20 -- # get_header_version suffix 00:06:47.649 12:37:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:47.649 12:37:36 version -- app/version.sh@14 -- # cut -f2 00:06:47.649 12:37:36 version -- app/version.sh@20 -- # suffix=-pre 00:06:47.649 12:37:36 version -- app/version.sh@22 -- # version=25.1 00:06:47.649 12:37:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:47.649 12:37:36 version -- app/version.sh@28 -- # version=25.1rc0 00:06:47.649 12:37:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:47.649 12:37:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:47.649 12:37:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:47.649 12:37:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:47.649 00:06:47.649 real 0m0.258s 00:06:47.649 user 0m0.154s 00:06:47.649 sys 0m0.142s 00:06:47.649 12:37:36 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.649 12:37:36 version -- common/autotest_common.sh@10 -- # set +x 00:06:47.649 ************************************ 00:06:47.649 END TEST version 00:06:47.649 ************************************ 00:06:47.649 12:37:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:47.649 12:37:36 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:47.649 12:37:36 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:47.649 12:37:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.649 12:37:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.649 12:37:36 -- 
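version.sh above greps SPDK_VERSION_{MAJOR,MINOR,PATCH,SUFFIX} out of include/spdk/version.h (25, 1, 0, -pre) and assembles "25.1rc0" to compare against `python3 -c 'import spdk; print(spdk.__version__)'`. The assembly logic, as observed in this trace, can be sketched as below; how the real script handles suffixes other than -pre is an assumption not shown in the log:

```python
def assemble_version(major: int, minor: int, patch: int, suffix: str) -> str:
    """Mirror the version assembly traced in version.sh: patch is appended
    only when non-zero, and a pre-release suffix maps to an rc0 marker."""
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"
    if suffix:
        version += "rc0"  # '-pre' observed to become 'rc0' in the trace
    return version

print(assemble_version(25, 1, 0, "-pre"))  # 25.1rc0
```

This matches the trace's `version=25.1`, the false `(( patch != 0 ))` branch, and the final `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` check against py_version.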
common/autotest_common.sh@10 -- # set +x 00:06:47.649 ************************************ 00:06:47.649 START TEST bdev_raid 00:06:47.649 ************************************ 00:06:47.649 12:37:36 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:47.649 * Looking for test storage... 00:06:47.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:47.649 12:37:36 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.649 12:37:36 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.649 12:37:36 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.909 12:37:36 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:47.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.909 --rc genhtml_branch_coverage=1 00:06:47.909 --rc genhtml_function_coverage=1 00:06:47.909 --rc genhtml_legend=1 00:06:47.909 --rc geninfo_all_blocks=1 00:06:47.909 --rc geninfo_unexecuted_blocks=1 00:06:47.909 00:06:47.909 ' 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:47.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.909 --rc genhtml_branch_coverage=1 00:06:47.909 --rc genhtml_function_coverage=1 00:06:47.909 --rc genhtml_legend=1 00:06:47.909 --rc geninfo_all_blocks=1 00:06:47.909 --rc geninfo_unexecuted_blocks=1 00:06:47.909 00:06:47.909 ' 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:06:47.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.909 --rc genhtml_branch_coverage=1 00:06:47.909 --rc genhtml_function_coverage=1 00:06:47.909 --rc genhtml_legend=1 00:06:47.909 --rc geninfo_all_blocks=1 00:06:47.909 --rc geninfo_unexecuted_blocks=1 00:06:47.909 00:06:47.909 ' 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:47.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.909 --rc genhtml_branch_coverage=1 00:06:47.909 --rc genhtml_function_coverage=1 00:06:47.909 --rc genhtml_legend=1 00:06:47.909 --rc geninfo_all_blocks=1 00:06:47.909 --rc geninfo_unexecuted_blocks=1 00:06:47.909 00:06:47.909 ' 00:06:47.909 12:37:36 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:47.909 12:37:36 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:47.909 12:37:36 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:47.909 12:37:36 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:47.909 12:37:36 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:47.909 12:37:36 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:47.909 12:37:36 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.909 12:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.909 ************************************ 00:06:47.909 START TEST raid1_resize_data_offset_test 00:06:47.909 ************************************ 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59980 00:06:47.909 Process raid pid: 59980 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59980' 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59980 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59980 ']' 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.909 12:37:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.909 [2024-11-06 12:37:36.475623] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:47.909 [2024-11-06 12:37:36.475819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.168 [2024-11-06 12:37:36.662466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.168 [2024-11-06 12:37:36.795258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.427 [2024-11-06 12:37:37.008669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.427 [2024-11-06 12:37:37.008725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.995 malloc0 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.995 malloc1 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.995 12:37:37 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.995 null0 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.995 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.255 [2024-11-06 12:37:37.653304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:49.255 [2024-11-06 12:37:37.655890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:49.255 [2024-11-06 12:37:37.655990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:49.256 [2024-11-06 12:37:37.656228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.256 [2024-11-06 12:37:37.656259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:49.256 [2024-11-06 12:37:37.656646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.256 [2024-11-06 12:37:37.656871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.256 [2024-11-06 12:37:37.656913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.256 [2024-11-06 12:37:37.657251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.256 [2024-11-06 12:37:37.717559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.256 12:37:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 malloc2 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 [2024-11-06 12:37:38.274754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:49.825 [2024-11-06 12:37:38.291577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.825 [2024-11-06 12:37:38.294098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59980 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59980 ']' 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59980 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59980 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59980' 00:06:49.825 killing process with pid 59980 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59980 00:06:49.825 12:37:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59980 00:06:49.825 [2024-11-06 12:37:38.389166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.825 [2024-11-06 12:37:38.390905] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:49.825 [2024-11-06 12:37:38.391009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.825 [2024-11-06 12:37:38.391042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:49.825 [2024-11-06 12:37:38.423309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.825 [2024-11-06 12:37:38.423818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.825 [2024-11-06 12:37:38.423850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:51.730 [2024-11-06 12:37:40.084363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.665 12:37:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:52.665 00:06:52.665 real 0m4.779s 00:06:52.665 user 0m4.745s 00:06:52.665 sys 0m0.663s 00:06:52.665 12:37:41 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.665 12:37:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.665 ************************************ 00:06:52.665 END TEST raid1_resize_data_offset_test 00:06:52.665 ************************************ 00:06:52.665 12:37:41 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:52.665 12:37:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.665 12:37:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.665 12:37:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.665 ************************************ 00:06:52.665 START TEST raid0_resize_superblock_test 00:06:52.665 ************************************ 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60067 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:52.665 Process raid pid: 60067 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60067' 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60067 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60067 ']' 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.665 12:37:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.665 [2024-11-06 12:37:41.282782] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:06:52.665 [2024-11-06 12:37:41.282971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.924 [2024-11-06 12:37:41.458604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.183 [2024-11-06 12:37:41.589896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.183 [2024-11-06 12:37:41.804474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.183 [2024-11-06 12:37:41.804545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.750 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.750 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:53.750 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:53.750 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.750 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:54.318 malloc0 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.318 [2024-11-06 12:37:42.830836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:54.318 [2024-11-06 12:37:42.830955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.318 [2024-11-06 12:37:42.831023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:54.318 [2024-11-06 12:37:42.831046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.318 [2024-11-06 12:37:42.834149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.318 [2024-11-06 12:37:42.834225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:54.318 pt0 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.318 7daf811e-ab79-48f6-81b4-3dd3e1f97a3d 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.318 5ef8fdce-f3d7-4a83-8a43-7d214d373b8b 00:06:54.318 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 f2b64a02-f097-4e1d-89a1-30b94cf61f26 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 [2024-11-06 12:37:42.988848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5ef8fdce-f3d7-4a83-8a43-7d214d373b8b is claimed 00:06:54.578 [2024-11-06 12:37:42.989066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f2b64a02-f097-4e1d-89a1-30b94cf61f26 is claimed 00:06:54.578 [2024-11-06 12:37:42.989394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.578 [2024-11-06 12:37:42.989444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:54.578 [2024-11-06 12:37:42.989894] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:54.578 [2024-11-06 12:37:42.990212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.578 [2024-11-06 12:37:42.990236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:54.578 [2024-11-06 12:37:42.990456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 12:37:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:54.578 12:37:43 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 [2024-11-06 12:37:43.105145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 [2024-11-06 12:37:43.161149] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.578 [2024-11-06 12:37:43.161236] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5ef8fdce-f3d7-4a83-8a43-7d214d373b8b' was resized: old size 131072, new size 204800 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 [2024-11-06 12:37:43.168967] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.578 [2024-11-06 12:37:43.169000] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f2b64a02-f097-4e1d-89a1-30b94cf61f26' was resized: old size 131072, new size 204800 00:06:54.578 [2024-11-06 12:37:43.169068] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:54.578 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:54.579 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:54.579 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.579 12:37:43 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 [2024-11-06 12:37:43.297258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 [2024-11-06 12:37:43.348941] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:54.838 [2024-11-06 12:37:43.349061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:54.838 [2024-11-06 12:37:43.349086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.838 [2024-11-06 12:37:43.349106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:54.838 [2024-11-06 12:37:43.349292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.838 [2024-11-06 12:37:43.349356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.838 [2024-11-06 12:37:43.349379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 [2024-11-06 12:37:43.356815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:54.838 [2024-11-06 12:37:43.356896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.838 [2024-11-06 12:37:43.356928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:54.838 [2024-11-06 12:37:43.356946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.838 [2024-11-06 12:37:43.359995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.838 [2024-11-06 12:37:43.360211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:54.838 pt0 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 [2024-11-06 12:37:43.362675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5ef8fdce-f3d7-4a83-8a43-7d214d373b8b 00:06:54.838 [2024-11-06 12:37:43.362748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5ef8fdce-f3d7-4a83-8a43-7d214d373b8b is claimed 00:06:54.838 [2024-11-06 12:37:43.362882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f2b64a02-f097-4e1d-89a1-30b94cf61f26 00:06:54.838 [2024-11-06 12:37:43.362915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f2b64a02-f097-4e1d-89a1-30b94cf61f26 is claimed 00:06:54.838 [2024-11-06 12:37:43.363093] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f2b64a02-f097-4e1d-89a1-30b94cf61f26 (2) smaller than existing raid bdev Raid (3) 00:06:54.838 [2024-11-06 12:37:43.363137] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5ef8fdce-f3d7-4a83-8a43-7d214d373b8b: File exists 00:06:54.838 [2024-11-06 12:37:43.363329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:54.838 [2024-11-06 12:37:43.363363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:54.838 [2024-11-06 12:37:43.363700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:54.838 [2024-11-06 12:37:43.363951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:54.838 [2024-11-06 
12:37:43.363966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:54.838 [2024-11-06 12:37:43.364171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.838 [2024-11-06 12:37:43.381263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60067 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60067 ']' 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60067 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60067 00:06:54.838 killing process with pid 60067 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60067' 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60067 00:06:54.838 [2024-11-06 12:37:43.451737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.838 12:37:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60067 00:06:54.838 [2024-11-06 12:37:43.451856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.838 [2024-11-06 12:37:43.451923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.838 [2024-11-06 12:37:43.451939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:56.254 [2024-11-06 12:37:44.821410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.636 ************************************ 00:06:57.636 END TEST raid0_resize_superblock_test 00:06:57.636 ************************************ 00:06:57.636 12:37:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:57.636 00:06:57.636 real 0m4.719s 00:06:57.636 user 0m5.030s 00:06:57.636 sys 0m0.626s 00:06:57.636 12:37:45 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.636 12:37:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.636 12:37:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:57.636 12:37:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:57.636 12:37:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.636 12:37:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.636 ************************************ 00:06:57.636 START TEST raid1_resize_superblock_test 00:06:57.636 ************************************ 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60166 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60166' 00:06:57.636 Process raid pid: 60166 00:06:57.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60166 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60166 ']' 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.636 12:37:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.636 [2024-11-06 12:37:46.075503] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:06:57.636 [2024-11-06 12:37:46.076375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.636 [2024-11-06 12:37:46.275140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.911 [2024-11-06 12:37:46.411217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.169 [2024-11-06 12:37:46.653642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.169 [2024-11-06 12:37:46.654010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.736 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.736 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:58.736 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:58.736 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.736 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 malloc0 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 [2024-11-06 12:37:47.743503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.303 [2024-11-06 12:37:47.743583] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.303 [2024-11-06 12:37:47.743618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:59.303 [2024-11-06 12:37:47.743637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.303 [2024-11-06 12:37:47.746629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.303 [2024-11-06 12:37:47.746690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:59.303 pt0 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 75b8d779-8bf4-4025-8886-38e7333a4071 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 f8613ae3-abe2-4ade-bf23-00b2b4f83ecb 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 dd2de168-ad50-4a1f-bea8-dcb7984eb320 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 [2024-11-06 12:37:47.890063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8613ae3-abe2-4ade-bf23-00b2b4f83ecb is claimed 00:06:59.303 [2024-11-06 12:37:47.890174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev dd2de168-ad50-4a1f-bea8-dcb7984eb320 is claimed 00:06:59.303 [2024-11-06 12:37:47.890406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:59.303 [2024-11-06 12:37:47.890433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:59.303 [2024-11-06 12:37:47.890774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:59.303 [2024-11-06 12:37:47.891031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:59.303 [2024-11-06 12:37:47.891048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:59.303 [2024-11-06 12:37:47.891256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.303 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 12:37:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:59.562 [2024-11-06 
12:37:48.010394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 [2024-11-06 12:37:48.062399] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.562 [2024-11-06 12:37:48.062435] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f8613ae3-abe2-4ade-bf23-00b2b4f83ecb' was resized: old size 131072, new size 204800 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 [2024-11-06 12:37:48.070239] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.562 [2024-11-06 12:37:48.070268] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'dd2de168-ad50-4a1f-bea8-dcb7984eb320' was resized: old size 131072, new size 204800 00:06:59.562 
[2024-11-06 12:37:48.070305] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.562 12:37:48 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.562 [2024-11-06 12:37:48.182445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.562 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.822 [2024-11-06 12:37:48.234151] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:59.822 [2024-11-06 12:37:48.234274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:59.822 [2024-11-06 12:37:48.234322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:59.822 [2024-11-06 12:37:48.234512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.822 [2024-11-06 12:37:48.234810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.822 [2024-11-06 12:37:48.234914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.822 
[2024-11-06 12:37:48.234938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.822 [2024-11-06 12:37:48.242077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.822 [2024-11-06 12:37:48.242157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.822 [2024-11-06 12:37:48.242210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:59.822 [2024-11-06 12:37:48.242233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.822 [2024-11-06 12:37:48.245269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.822 [2024-11-06 12:37:48.245333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:59.822 pt0 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.822 [2024-11-06 12:37:48.247808] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f8613ae3-abe2-4ade-bf23-00b2b4f83ecb 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.822 [2024-11-06 12:37:48.247893] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f8613ae3-abe2-4ade-bf23-00b2b4f83ecb is claimed 00:06:59.822 [2024-11-06 12:37:48.248057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev dd2de168-ad50-4a1f-bea8-dcb7984eb320 00:06:59.822 [2024-11-06 12:37:48.248103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev dd2de168-ad50-4a1f-bea8-dcb7984eb320 is claimed 00:06:59.822 [2024-11-06 12:37:48.248278] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev dd2de168-ad50-4a1f-bea8-dcb7984eb320 (2) smaller than existing raid bdev Raid (3) 00:06:59.822 [2024-11-06 12:37:48.248310] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f8613ae3-abe2-4ade-bf23-00b2b4f83ecb: File exists 00:06:59.822 [2024-11-06 12:37:48.248365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:59.822 [2024-11-06 12:37:48.248385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:59.822 [2024-11-06 12:37:48.248706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:59.822 [2024-11-06 12:37:48.248919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:59.822 [2024-11-06 12:37:48.248935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:59.822 [2024-11-06 12:37:48.249126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:59.822 [2024-11-06 12:37:48.262481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60166 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60166 ']' 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60166 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60166 00:06:59.822 killing process with pid 60166 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60166' 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60166 00:06:59.822 [2024-11-06 12:37:48.345470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.822 12:37:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60166 00:06:59.822 [2024-11-06 12:37:48.345571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.822 [2024-11-06 12:37:48.345645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.822 [2024-11-06 12:37:48.345659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:01.197 [2024-11-06 12:37:49.684592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.574 12:37:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:02.574 00:07:02.574 real 0m4.868s 00:07:02.574 user 0m5.174s 00:07:02.574 sys 0m0.729s 00:07:02.574 12:37:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.574 ************************************ 00:07:02.574 END TEST raid1_resize_superblock_test 00:07:02.574 ************************************ 00:07:02.574 12:37:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.574 12:37:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:02.574 12:37:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:02.574 12:37:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:02.574 12:37:50 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:02.574 12:37:50 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:02.574 12:37:50 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:02.574 
12:37:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:02.574 12:37:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.574 12:37:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.574 ************************************ 00:07:02.574 START TEST raid_function_test_raid0 00:07:02.574 ************************************ 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:02.574 Process raid pid: 60268 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60268 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60268' 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60268 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60268 ']' 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.574 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.575 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.575 12:37:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:02.575 [2024-11-06 12:37:51.013878] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:02.575 [2024-11-06 12:37:51.014084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.575 [2024-11-06 12:37:51.203876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.833 [2024-11-06 12:37:51.367637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.092 [2024-11-06 12:37:51.627738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.092 [2024-11-06 12:37:51.627820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.659 Base_1 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.659 
12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.659 Base_2 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.659 [2024-11-06 12:37:52.187455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:03.659 [2024-11-06 12:37:52.189966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:03.659 [2024-11-06 12:37:52.190062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:03.659 [2024-11-06 12:37:52.190083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.659 [2024-11-06 12:37:52.190435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:03.659 [2024-11-06 12:37:52.190619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:03.659 [2024-11-06 12:37:52.190634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:03.659 [2024-11-06 12:37:52.190816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:03.659 12:37:52 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.659 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:03.660 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:03.919 [2024-11-06 12:37:52.511682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:03.919 /dev/nbd0 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.919 1+0 records in 00:07:03.919 1+0 records out 00:07:03.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846957 s, 4.8 MB/s 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.919 12:37:52 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:04.178 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:04.178 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:04.178 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.437 { 00:07:04.437 "nbd_device": "/dev/nbd0", 00:07:04.437 "bdev_name": "raid" 00:07:04.437 } 00:07:04.437 ]' 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.437 { 00:07:04.437 "nbd_device": "/dev/nbd0", 00:07:04.437 "bdev_name": "raid" 00:07:04.437 } 00:07:04.437 ]' 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:04.437 4096+0 records in 00:07:04.437 4096+0 records out 00:07:04.437 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0333372 s, 62.9 MB/s 00:07:04.437 12:37:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:04.696 4096+0 records in 00:07:04.696 4096+0 records out 00:07:04.696 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.352594 s, 5.9 MB/s 00:07:04.696 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:04.696 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:04.954 128+0 records in 00:07:04.954 128+0 records out 00:07:04.954 65536 bytes (66 kB, 64 KiB) copied, 0.00062382 s, 105 MB/s 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.954 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:04.955 2035+0 records in 00:07:04.955 2035+0 records out 00:07:04.955 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00723194 s, 144 MB/s 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:04.955 456+0 records in 00:07:04.955 456+0 records out 00:07:04.955 233472 bytes (233 kB, 228 KiB) copied, 0.00164985 s, 142 MB/s 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.955 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:05.213 [2024-11-06 12:37:53.762133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.213 12:37:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:05.471 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.471 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.471 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60268 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60268 ']' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60268 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60268 00:07:05.729 killing process with pid 60268 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60268' 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60268 00:07:05.729 [2024-11-06 12:37:54.186813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.729 12:37:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60268 00:07:05.729 [2024-11-06 12:37:54.186947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.729 [2024-11-06 12:37:54.187045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.729 [2024-11-06 12:37:54.187068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:05.729 [2024-11-06 12:37:54.378350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.105 ************************************ 00:07:07.105 END TEST raid_function_test_raid0 00:07:07.105 ************************************ 00:07:07.105 12:37:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:07.105 00:07:07.105 real 0m4.518s 00:07:07.105 user 0m5.659s 00:07:07.105 sys 0m1.009s 00:07:07.105 12:37:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.105 12:37:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:07.105 12:37:55 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:07.105 12:37:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:07.105 12:37:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.105 12:37:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.105 
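The raid0 trace above is bdev_raid.sh's unmap-verify loop: fill the nbd device and a reference file with identical random data, then for each (offset, count) pair zero that range in the reference file with `dd conv=notrunc`, discard the matching byte range on the device with `blkdiscard`, flush, and `cmp` the full 2 MiB. A minimal stand-alone sketch of the same pattern follows; it uses a plain temp file in place of `/dev/nbd0`, so a second `dd` zeroing stands in for `blkdiscard` (the real test relies on the assumption that discarded RAID blocks read back as zeroes):

```shell
#!/usr/bin/env bash
set -euo pipefail

ref=$(mktemp)   # stands in for /raidtest/raidrandtest
dev=$(mktemp)   # stands in for /dev/nbd0 (a plain file here, not a block device)

blksize=512
rw_blk_num=4096

# Populate the stand-in device and the reference file with identical random data.
dd if=/dev/urandom of="$ref" bs="$blksize" count="$rw_blk_num" status=none
cp "$ref" "$dev"

# Same (offset, count) pairs the traced test iterates over.
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

for i in 0 1 2; do
  off=${unmap_blk_offs[$i]}
  num=${unmap_blk_nums[$i]}
  # Zero the range in the reference file (notrunc keeps the file size) ...
  dd if=/dev/zero of="$ref" bs="$blksize" seek="$off" count="$num" conv=notrunc status=none
  # ... and in the stand-in device; the real test issues
  #   blkdiscard -o $((off * blksize)) -l $((num * blksize)) /dev/nbd0
  # here instead.
  dd if=/dev/zero of="$dev" bs="$blksize" seek="$off" count="$num" conv=notrunc status=none
  # Full-length compare: any mismatch aborts via set -e.
  cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"
done

rm -f "$ref" "$dev"
result="unmap verify OK"
echo "$result"
```

Unlike the traced run, this sketch cannot exercise the RAID discard path itself; it only demonstrates the dd/discard/cmp bookkeeping the test builds on.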
************************************ 00:07:07.105 START TEST raid_function_test_concat 00:07:07.105 ************************************ 00:07:07.105 Process raid pid: 60408 00:07:07.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60408 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60408' 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60408 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60408 ']' 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.105 12:37:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.105 [2024-11-06 12:37:55.589419] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:07.105 [2024-11-06 12:37:55.589943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.364 [2024-11-06 12:37:55.783226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.364 [2024-11-06 12:37:55.946692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.622 [2024-11-06 12:37:56.185412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.622 [2024-11-06 12:37:56.185665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 Base_1 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 Base_2 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 [2024-11-06 12:37:56.702561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:08.205 [2024-11-06 12:37:56.705110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:08.205 [2024-11-06 12:37:56.705225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:08.205 [2024-11-06 12:37:56.705248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:08.205 [2024-11-06 12:37:56.705563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.205 [2024-11-06 12:37:56.705755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:08.205 [2024-11-06 12:37:56.705771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:08.205 [2024-11-06 12:37:56.705949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:08.205 12:37:56 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:08.205 12:37:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:08.464 [2024-11-06 12:37:57.046732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:08.464 /dev/nbd0 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.464 1+0 records in 00:07:08.464 1+0 records out 00:07:08.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418336 s, 9.8 MB/s 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.464 
12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.464 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:08.722 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.722 { 00:07:08.722 "nbd_device": "/dev/nbd0", 00:07:08.722 "bdev_name": "raid" 00:07:08.722 } 00:07:08.722 ]' 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.982 { 00:07:08.982 "nbd_device": "/dev/nbd0", 00:07:08.982 "bdev_name": "raid" 00:07:08.982 } 00:07:08.982 ]' 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:08.982 
12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:08.982 4096+0 records in 00:07:08.982 4096+0 records out 00:07:08.982 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0262244 s, 80.0 MB/s 00:07:08.982 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:09.241 4096+0 records in 00:07:09.241 4096+0 
records out 00:07:09.241 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.351788 s, 6.0 MB/s 00:07:09.241 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:09.241 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:09.241 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:09.242 128+0 records in 00:07:09.242 128+0 records out 00:07:09.242 65536 bytes (66 kB, 64 KiB) copied, 0.00109987 s, 59.6 MB/s 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:09.242 2035+0 records in 00:07:09.242 2035+0 records out 00:07:09.242 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0134653 s, 77.4 MB/s 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:09.242 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:09.501 456+0 records in 00:07:09.501 456+0 records out 00:07:09.501 233472 bytes (233 kB, 228 KiB) copied, 0.00297056 s, 78.6 MB/s 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.501 12:37:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:09.760 [2024-11-06 12:37:58.255755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:09.760 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.760 12:37:58 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60408 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60408 ']' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60408 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60408 00:07:10.018 killing process with pid 60408 00:07:10.018 12:37:58 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60408' 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60408 00:07:10.018 [2024-11-06 12:37:58.643987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.018 12:37:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60408 00:07:10.018 [2024-11-06 12:37:58.644105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.018 [2024-11-06 12:37:58.644186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.018 [2024-11-06 12:37:58.644223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:10.277 [2024-11-06 12:37:58.830005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.655 12:37:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:11.655 00:07:11.655 real 0m4.391s 00:07:11.655 user 0m5.436s 00:07:11.655 sys 0m1.005s 00:07:11.655 12:37:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.655 12:37:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 ************************************ 00:07:11.655 END TEST raid_function_test_concat 00:07:11.655 ************************************ 00:07:11.655 12:37:59 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:11.655 12:37:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:11.655 12:37:59 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.655 12:37:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 ************************************ 00:07:11.655 START TEST raid0_resize_test 00:07:11.655 ************************************ 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60536 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.655 Process raid pid: 60536 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60536' 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60536 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60536 ']' 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:07:11.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.655 12:37:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 [2024-11-06 12:38:00.028228] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:11.655 [2024-11-06 12:38:00.028395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.655 [2024-11-06 12:38:00.221155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.913 [2024-11-06 12:38:00.380381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.171 [2024-11-06 12:38:00.606642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.171 [2024-11-06 12:38:00.606694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.430 Base_1 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.430 
12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.430 Base_2 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.430 [2024-11-06 12:38:01.050550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:12.430 [2024-11-06 12:38:01.053013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:12.430 [2024-11-06 12:38:01.053086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.430 [2024-11-06 12:38:01.053106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.430 [2024-11-06 12:38:01.053473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:12.430 [2024-11-06 12:38:01.053679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.430 [2024-11-06 12:38:01.053701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:12.430 [2024-11-06 12:38:01.053852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.430 
12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.430 [2024-11-06 12:38:01.058495] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.430 [2024-11-06 12:38:01.058575] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:12.430 true 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:12.430 [2024-11-06 12:38:01.070716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.430 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.689 [2024-11-06 12:38:01.126554] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.689 [2024-11-06 12:38:01.126588] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:12.689 [2024-11-06 12:38:01.126624] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:12.689 true 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:12.689 [2024-11-06 12:38:01.138765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:12.689 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60536 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@952 -- # '[' -z 60536 ']' 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60536 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60536 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.690 killing process with pid 60536 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60536' 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60536 00:07:12.690 [2024-11-06 12:38:01.225261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.690 12:38:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60536 00:07:12.690 [2024-11-06 12:38:01.225377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.690 [2024-11-06 12:38:01.225443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.690 [2024-11-06 12:38:01.225457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:12.690 [2024-11-06 12:38:01.241838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.065 12:38:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:14.065 00:07:14.065 real 0m2.399s 00:07:14.065 user 0m2.686s 00:07:14.065 sys 0m0.374s 00:07:14.065 12:38:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.065 
12:38:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.065 ************************************ 00:07:14.065 END TEST raid0_resize_test 00:07:14.065 ************************************ 00:07:14.065 12:38:02 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:14.065 12:38:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:14.065 12:38:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.065 12:38:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.065 ************************************ 00:07:14.065 START TEST raid1_resize_test 00:07:14.065 ************************************ 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60598 00:07:14.065 Process raid pid: 60598 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60598' 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60598 00:07:14.065 12:38:02 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60598 ']' 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.065 12:38:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.065 [2024-11-06 12:38:02.490387] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:07:14.065 [2024-11-06 12:38:02.490570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.065 [2024-11-06 12:38:02.688700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.324 [2024-11-06 12:38:02.851763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.582 [2024-11-06 12:38:03.067989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.582 [2024-11-06 12:38:03.068050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 Base_1 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 Base_2 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 [2024-11-06 12:38:03.472648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:14.841 [2024-11-06 12:38:03.475087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:14.841 [2024-11-06 12:38:03.475185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:14.841 [2024-11-06 12:38:03.475219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:14.841 [2024-11-06 12:38:03.475546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.841 [2024-11-06 12:38:03.475721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:14.841 [2024-11-06 12:38:03.475737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:14.841 [2024-11-06 12:38:03.475916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 [2024-11-06 12:38:03.480649] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.841 [2024-11-06 12:38:03.480705] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:14.841 true 00:07:14.841 
12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.841 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.841 [2024-11-06 12:38:03.492880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.100 [2024-11-06 12:38:03.544688] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.100 [2024-11-06 12:38:03.544722] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:15.100 [2024-11-06 12:38:03.544779] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:15.100 true 00:07:15.100 12:38:03 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:15.100 [2024-11-06 12:38:03.556895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60598 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60598 ']' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60598 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60598 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:15.100 killing process with pid 60598 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60598' 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60598 00:07:15.100 [2024-11-06 12:38:03.640134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.100 12:38:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60598 00:07:15.100 [2024-11-06 12:38:03.640258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.100 [2024-11-06 12:38:03.640854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.100 [2024-11-06 12:38:03.640888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:15.100 [2024-11-06 12:38:03.655723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.036 12:38:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:16.036 00:07:16.036 real 0m2.295s 00:07:16.036 user 0m2.541s 00:07:16.036 sys 0m0.387s 00:07:16.036 12:38:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.036 ************************************ 00:07:16.036 12:38:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.036 END TEST raid1_resize_test 00:07:16.036 ************************************ 00:07:16.295 12:38:04 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:16.295 12:38:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:16.295 12:38:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:16.295 12:38:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:16.295 12:38:04 bdev_raid 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.295 12:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.295 ************************************ 00:07:16.295 START TEST raid_state_function_test 00:07:16.295 ************************************ 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60655 00:07:16.295 Process raid pid: 60655 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60655' 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60655 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60655 ']' 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.295 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.296 12:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.296 [2024-11-06 12:38:04.843010] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:16.296 [2024-11-06 12:38:04.843246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.554 [2024-11-06 12:38:05.029299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.554 [2024-11-06 12:38:05.158332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.813 [2024-11-06 12:38:05.370956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.813 [2024-11-06 12:38:05.371016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.415 [2024-11-06 12:38:05.776009] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.415 
[2024-11-06 12:38:05.776095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.415 [2024-11-06 12:38:05.776112] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.415 [2024-11-06 12:38:05.776129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.415 "name": "Existed_Raid", 00:07:17.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.415 "strip_size_kb": 64, 00:07:17.415 "state": "configuring", 00:07:17.415 "raid_level": "raid0", 00:07:17.415 "superblock": false, 00:07:17.415 "num_base_bdevs": 2, 00:07:17.415 "num_base_bdevs_discovered": 0, 00:07:17.415 "num_base_bdevs_operational": 2, 00:07:17.415 "base_bdevs_list": [ 00:07:17.415 { 00:07:17.415 "name": "BaseBdev1", 00:07:17.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.415 "is_configured": false, 00:07:17.415 "data_offset": 0, 00:07:17.415 "data_size": 0 00:07:17.415 }, 00:07:17.415 { 00:07:17.415 "name": "BaseBdev2", 00:07:17.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.415 "is_configured": false, 00:07:17.415 "data_offset": 0, 00:07:17.415 "data_size": 0 00:07:17.415 } 00:07:17.415 ] 00:07:17.415 }' 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.415 12:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 [2024-11-06 12:38:06.340113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.983 [2024-11-06 12:38:06.340164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 [2024-11-06 12:38:06.348079] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.983 [2024-11-06 12:38:06.348126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.983 [2024-11-06 12:38:06.348141] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.983 [2024-11-06 12:38:06.348160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 [2024-11-06 12:38:06.392682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.983 BaseBdev1 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:17.983 12:38:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 [ 00:07:17.983 { 00:07:17.983 "name": "BaseBdev1", 00:07:17.983 "aliases": [ 00:07:17.983 "16898fa1-c2e6-474a-abde-c6b83a075f4d" 00:07:17.983 ], 00:07:17.983 "product_name": "Malloc disk", 00:07:17.983 "block_size": 512, 00:07:17.983 "num_blocks": 65536, 00:07:17.983 "uuid": "16898fa1-c2e6-474a-abde-c6b83a075f4d", 00:07:17.983 "assigned_rate_limits": { 00:07:17.983 "rw_ios_per_sec": 0, 00:07:17.983 "rw_mbytes_per_sec": 0, 00:07:17.983 "r_mbytes_per_sec": 0, 00:07:17.983 "w_mbytes_per_sec": 0 00:07:17.983 }, 00:07:17.983 "claimed": true, 00:07:17.983 "claim_type": "exclusive_write", 00:07:17.983 "zoned": false, 00:07:17.983 "supported_io_types": { 00:07:17.983 "read": true, 00:07:17.983 "write": true, 00:07:17.983 "unmap": true, 00:07:17.983 "flush": true, 
00:07:17.983 "reset": true, 00:07:17.983 "nvme_admin": false, 00:07:17.983 "nvme_io": false, 00:07:17.983 "nvme_io_md": false, 00:07:17.983 "write_zeroes": true, 00:07:17.983 "zcopy": true, 00:07:17.983 "get_zone_info": false, 00:07:17.983 "zone_management": false, 00:07:17.983 "zone_append": false, 00:07:17.983 "compare": false, 00:07:17.983 "compare_and_write": false, 00:07:17.983 "abort": true, 00:07:17.983 "seek_hole": false, 00:07:17.983 "seek_data": false, 00:07:17.983 "copy": true, 00:07:17.983 "nvme_iov_md": false 00:07:17.983 }, 00:07:17.983 "memory_domains": [ 00:07:17.983 { 00:07:17.983 "dma_device_id": "system", 00:07:17.983 "dma_device_type": 1 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.983 "dma_device_type": 2 00:07:17.983 } 00:07:17.983 ], 00:07:17.983 "driver_specific": {} 00:07:17.983 } 00:07:17.983 ] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.983 "name": "Existed_Raid", 00:07:17.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.983 "strip_size_kb": 64, 00:07:17.983 "state": "configuring", 00:07:17.983 "raid_level": "raid0", 00:07:17.983 "superblock": false, 00:07:17.983 "num_base_bdevs": 2, 00:07:17.983 "num_base_bdevs_discovered": 1, 00:07:17.983 "num_base_bdevs_operational": 2, 00:07:17.983 "base_bdevs_list": [ 00:07:17.983 { 00:07:17.983 "name": "BaseBdev1", 00:07:17.983 "uuid": "16898fa1-c2e6-474a-abde-c6b83a075f4d", 00:07:17.983 "is_configured": true, 00:07:17.983 "data_offset": 0, 00:07:17.983 "data_size": 65536 00:07:17.983 }, 00:07:17.983 { 00:07:17.983 "name": "BaseBdev2", 00:07:17.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.983 "is_configured": false, 00:07:17.983 "data_offset": 0, 00:07:17.983 "data_size": 0 00:07:17.983 } 00:07:17.983 ] 00:07:17.983 }' 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.983 12:38:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.551 [2024-11-06 12:38:06.908882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.551 [2024-11-06 12:38:06.908953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.551 [2024-11-06 12:38:06.916884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.551 [2024-11-06 12:38:06.919254] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.551 [2024-11-06 12:38:06.919306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.551 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.552 "name": "Existed_Raid", 00:07:18.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.552 "strip_size_kb": 64, 00:07:18.552 "state": "configuring", 00:07:18.552 "raid_level": "raid0", 00:07:18.552 "superblock": false, 00:07:18.552 "num_base_bdevs": 2, 00:07:18.552 
"num_base_bdevs_discovered": 1, 00:07:18.552 "num_base_bdevs_operational": 2, 00:07:18.552 "base_bdevs_list": [ 00:07:18.552 { 00:07:18.552 "name": "BaseBdev1", 00:07:18.552 "uuid": "16898fa1-c2e6-474a-abde-c6b83a075f4d", 00:07:18.552 "is_configured": true, 00:07:18.552 "data_offset": 0, 00:07:18.552 "data_size": 65536 00:07:18.552 }, 00:07:18.552 { 00:07:18.552 "name": "BaseBdev2", 00:07:18.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.552 "is_configured": false, 00:07:18.552 "data_offset": 0, 00:07:18.552 "data_size": 0 00:07:18.552 } 00:07:18.552 ] 00:07:18.552 }' 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.552 12:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.811 [2024-11-06 12:38:07.443057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.811 [2024-11-06 12:38:07.443126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.811 [2024-11-06 12:38:07.443140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:18.811 [2024-11-06 12:38:07.443526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:18.811 [2024-11-06 12:38:07.443740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.811 [2024-11-06 12:38:07.443773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:18.811 [2024-11-06 12:38:07.444087] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.811 BaseBdev2 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.811 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.811 [ 00:07:18.811 { 00:07:18.811 "name": "BaseBdev2", 00:07:18.811 "aliases": [ 00:07:18.811 "35f41595-8e52-4335-ad48-6c15347efe6c" 00:07:18.811 ], 00:07:18.811 "product_name": "Malloc disk", 00:07:18.811 "block_size": 512, 00:07:18.811 "num_blocks": 65536, 00:07:18.811 "uuid": "35f41595-8e52-4335-ad48-6c15347efe6c", 00:07:18.811 
"assigned_rate_limits": { 00:07:18.811 "rw_ios_per_sec": 0, 00:07:18.811 "rw_mbytes_per_sec": 0, 00:07:18.811 "r_mbytes_per_sec": 0, 00:07:18.811 "w_mbytes_per_sec": 0 00:07:19.070 }, 00:07:19.070 "claimed": true, 00:07:19.070 "claim_type": "exclusive_write", 00:07:19.070 "zoned": false, 00:07:19.070 "supported_io_types": { 00:07:19.070 "read": true, 00:07:19.070 "write": true, 00:07:19.070 "unmap": true, 00:07:19.070 "flush": true, 00:07:19.070 "reset": true, 00:07:19.070 "nvme_admin": false, 00:07:19.070 "nvme_io": false, 00:07:19.070 "nvme_io_md": false, 00:07:19.070 "write_zeroes": true, 00:07:19.070 "zcopy": true, 00:07:19.070 "get_zone_info": false, 00:07:19.070 "zone_management": false, 00:07:19.070 "zone_append": false, 00:07:19.070 "compare": false, 00:07:19.070 "compare_and_write": false, 00:07:19.070 "abort": true, 00:07:19.070 "seek_hole": false, 00:07:19.070 "seek_data": false, 00:07:19.070 "copy": true, 00:07:19.070 "nvme_iov_md": false 00:07:19.070 }, 00:07:19.070 "memory_domains": [ 00:07:19.070 { 00:07:19.070 "dma_device_id": "system", 00:07:19.070 "dma_device_type": 1 00:07:19.070 }, 00:07:19.070 { 00:07:19.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.070 "dma_device_type": 2 00:07:19.070 } 00:07:19.070 ], 00:07:19.070 "driver_specific": {} 00:07:19.070 } 00:07:19.070 ] 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.070 "name": "Existed_Raid", 00:07:19.070 "uuid": "cb45d053-8c1e-41d4-a0c5-9358230df18e", 00:07:19.070 "strip_size_kb": 64, 00:07:19.070 "state": "online", 00:07:19.070 "raid_level": "raid0", 00:07:19.070 "superblock": false, 00:07:19.070 "num_base_bdevs": 2, 00:07:19.070 "num_base_bdevs_discovered": 2, 00:07:19.070 "num_base_bdevs_operational": 2, 00:07:19.070 "base_bdevs_list": [ 00:07:19.070 { 
00:07:19.070 "name": "BaseBdev1", 00:07:19.070 "uuid": "16898fa1-c2e6-474a-abde-c6b83a075f4d", 00:07:19.070 "is_configured": true, 00:07:19.070 "data_offset": 0, 00:07:19.070 "data_size": 65536 00:07:19.070 }, 00:07:19.070 { 00:07:19.070 "name": "BaseBdev2", 00:07:19.070 "uuid": "35f41595-8e52-4335-ad48-6c15347efe6c", 00:07:19.070 "is_configured": true, 00:07:19.070 "data_offset": 0, 00:07:19.070 "data_size": 65536 00:07:19.070 } 00:07:19.070 ] 00:07:19.070 }' 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.070 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 [2024-11-06 12:38:07.971607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.365 12:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:19.365 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.365 "name": "Existed_Raid", 00:07:19.365 "aliases": [ 00:07:19.365 "cb45d053-8c1e-41d4-a0c5-9358230df18e" 00:07:19.365 ], 00:07:19.365 "product_name": "Raid Volume", 00:07:19.365 "block_size": 512, 00:07:19.365 "num_blocks": 131072, 00:07:19.365 "uuid": "cb45d053-8c1e-41d4-a0c5-9358230df18e", 00:07:19.365 "assigned_rate_limits": { 00:07:19.365 "rw_ios_per_sec": 0, 00:07:19.365 "rw_mbytes_per_sec": 0, 00:07:19.365 "r_mbytes_per_sec": 0, 00:07:19.365 "w_mbytes_per_sec": 0 00:07:19.365 }, 00:07:19.365 "claimed": false, 00:07:19.365 "zoned": false, 00:07:19.365 "supported_io_types": { 00:07:19.365 "read": true, 00:07:19.365 "write": true, 00:07:19.365 "unmap": true, 00:07:19.365 "flush": true, 00:07:19.365 "reset": true, 00:07:19.365 "nvme_admin": false, 00:07:19.365 "nvme_io": false, 00:07:19.365 "nvme_io_md": false, 00:07:19.365 "write_zeroes": true, 00:07:19.365 "zcopy": false, 00:07:19.365 "get_zone_info": false, 00:07:19.365 "zone_management": false, 00:07:19.365 "zone_append": false, 00:07:19.366 "compare": false, 00:07:19.366 "compare_and_write": false, 00:07:19.366 "abort": false, 00:07:19.366 "seek_hole": false, 00:07:19.366 "seek_data": false, 00:07:19.366 "copy": false, 00:07:19.366 "nvme_iov_md": false 00:07:19.366 }, 00:07:19.366 "memory_domains": [ 00:07:19.366 { 00:07:19.366 "dma_device_id": "system", 00:07:19.366 "dma_device_type": 1 00:07:19.366 }, 00:07:19.366 { 00:07:19.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.366 "dma_device_type": 2 00:07:19.366 }, 00:07:19.366 { 00:07:19.366 "dma_device_id": "system", 00:07:19.366 "dma_device_type": 1 00:07:19.366 }, 00:07:19.366 { 00:07:19.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.366 "dma_device_type": 2 00:07:19.366 } 00:07:19.366 ], 00:07:19.366 "driver_specific": { 00:07:19.366 "raid": { 00:07:19.366 "uuid": "cb45d053-8c1e-41d4-a0c5-9358230df18e", 
00:07:19.366 "strip_size_kb": 64, 00:07:19.366 "state": "online", 00:07:19.366 "raid_level": "raid0", 00:07:19.366 "superblock": false, 00:07:19.366 "num_base_bdevs": 2, 00:07:19.366 "num_base_bdevs_discovered": 2, 00:07:19.366 "num_base_bdevs_operational": 2, 00:07:19.366 "base_bdevs_list": [ 00:07:19.366 { 00:07:19.366 "name": "BaseBdev1", 00:07:19.366 "uuid": "16898fa1-c2e6-474a-abde-c6b83a075f4d", 00:07:19.366 "is_configured": true, 00:07:19.366 "data_offset": 0, 00:07:19.366 "data_size": 65536 00:07:19.366 }, 00:07:19.366 { 00:07:19.366 "name": "BaseBdev2", 00:07:19.366 "uuid": "35f41595-8e52-4335-ad48-6c15347efe6c", 00:07:19.366 "is_configured": true, 00:07:19.366 "data_offset": 0, 00:07:19.366 "data_size": 65536 00:07:19.366 } 00:07:19.366 ] 00:07:19.366 } 00:07:19.366 } 00:07:19.366 }' 00:07:19.366 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:19.625 BaseBdev2' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.625 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.625 [2024-11-06 12:38:08.211350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:19.625 [2024-11-06 12:38:08.211395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.625 [2024-11-06 12:38:08.211471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.885 12:38:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.885 
12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.885 "name": "Existed_Raid", 00:07:19.885 "uuid": "cb45d053-8c1e-41d4-a0c5-9358230df18e", 00:07:19.885 "strip_size_kb": 64, 00:07:19.885 "state": "offline", 00:07:19.885 "raid_level": "raid0", 00:07:19.885 "superblock": false, 00:07:19.885 "num_base_bdevs": 2, 00:07:19.885 "num_base_bdevs_discovered": 1, 00:07:19.885 "num_base_bdevs_operational": 1, 00:07:19.885 "base_bdevs_list": [ 00:07:19.885 { 00:07:19.885 "name": null, 00:07:19.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.885 "is_configured": false, 00:07:19.885 "data_offset": 0, 00:07:19.885 "data_size": 65536 00:07:19.885 }, 00:07:19.885 { 00:07:19.885 "name": "BaseBdev2", 00:07:19.885 "uuid": "35f41595-8e52-4335-ad48-6c15347efe6c", 00:07:19.885 "is_configured": true, 00:07:19.885 "data_offset": 0, 00:07:19.885 "data_size": 65536 00:07:19.885 } 00:07:19.885 ] 00:07:19.885 }' 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.885 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.452 12:38:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.452 [2024-11-06 12:38:08.864783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:20.452 [2024-11-06 12:38:08.864850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.452 12:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60655 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60655 ']' 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60655 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60655 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.452 killing process with pid 60655 00:07:20.452 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.453 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60655' 00:07:20.453 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60655 00:07:20.453 [2024-11-06 12:38:09.045276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.453 12:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60655 00:07:20.453 [2024-11-06 12:38:09.059868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.830 00:07:21.830 real 0m5.369s 00:07:21.830 user 0m8.124s 00:07:21.830 sys 
0m0.740s 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.830 ************************************ 00:07:21.830 END TEST raid_state_function_test 00:07:21.830 ************************************ 00:07:21.830 12:38:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:21.830 12:38:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:21.830 12:38:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.830 12:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.830 ************************************ 00:07:21.830 START TEST raid_state_function_test_sb 00:07:21.830 ************************************ 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60908 00:07:21.830 Process raid pid: 60908 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60908' 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60908 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60908 ']' 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.830 12:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.830 [2024-11-06 12:38:10.234535] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:07:21.830 [2024-11-06 12:38:10.234685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.830 [2024-11-06 12:38:10.408668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.088 [2024-11-06 12:38:10.540650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.346 [2024-11-06 12:38:10.779779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.346 [2024-11-06 12:38:10.779842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.605 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.605 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:22.605 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.605 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.605 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.605 [2024-11-06 12:38:11.258670] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.605 [2024-11-06 12:38:11.258733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.605 [2024-11-06 12:38:11.258749] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.605 [2024-11-06 12:38:11.258766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.913 
12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.913 "name": "Existed_Raid", 00:07:22.913 "uuid": "0668e7ec-8b09-4b0f-be23-507f02070982", 00:07:22.913 "strip_size_kb": 
64, 00:07:22.913 "state": "configuring", 00:07:22.913 "raid_level": "raid0", 00:07:22.913 "superblock": true, 00:07:22.913 "num_base_bdevs": 2, 00:07:22.913 "num_base_bdevs_discovered": 0, 00:07:22.913 "num_base_bdevs_operational": 2, 00:07:22.913 "base_bdevs_list": [ 00:07:22.913 { 00:07:22.913 "name": "BaseBdev1", 00:07:22.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.913 "is_configured": false, 00:07:22.913 "data_offset": 0, 00:07:22.913 "data_size": 0 00:07:22.913 }, 00:07:22.913 { 00:07:22.913 "name": "BaseBdev2", 00:07:22.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.913 "is_configured": false, 00:07:22.913 "data_offset": 0, 00:07:22.913 "data_size": 0 00:07:22.913 } 00:07:22.913 ] 00:07:22.913 }' 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.913 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.171 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.172 [2024-11-06 12:38:11.778740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.172 [2024-11-06 12:38:11.778793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.172 12:38:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.172 [2024-11-06 12:38:11.786723] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.172 [2024-11-06 12:38:11.786770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.172 [2024-11-06 12:38:11.786784] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.172 [2024-11-06 12:38:11.786802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.172 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.430 [2024-11-06 12:38:11.831262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.430 BaseBdev1 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.430 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.430 [ 00:07:23.430 { 00:07:23.430 "name": "BaseBdev1", 00:07:23.430 "aliases": [ 00:07:23.430 "e56f5292-cf0d-43f1-b371-16d8d7be1fc8" 00:07:23.430 ], 00:07:23.430 "product_name": "Malloc disk", 00:07:23.430 "block_size": 512, 00:07:23.430 "num_blocks": 65536, 00:07:23.430 "uuid": "e56f5292-cf0d-43f1-b371-16d8d7be1fc8", 00:07:23.430 "assigned_rate_limits": { 00:07:23.430 "rw_ios_per_sec": 0, 00:07:23.430 "rw_mbytes_per_sec": 0, 00:07:23.430 "r_mbytes_per_sec": 0, 00:07:23.430 "w_mbytes_per_sec": 0 00:07:23.430 }, 00:07:23.430 "claimed": true, 00:07:23.430 "claim_type": "exclusive_write", 00:07:23.430 "zoned": false, 00:07:23.430 "supported_io_types": { 00:07:23.430 "read": true, 00:07:23.430 "write": true, 00:07:23.430 "unmap": true, 00:07:23.430 "flush": true, 00:07:23.430 "reset": true, 00:07:23.430 "nvme_admin": false, 00:07:23.430 "nvme_io": false, 00:07:23.430 "nvme_io_md": false, 00:07:23.430 "write_zeroes": true, 00:07:23.430 "zcopy": true, 00:07:23.430 "get_zone_info": false, 00:07:23.431 "zone_management": false, 00:07:23.431 "zone_append": false, 00:07:23.431 "compare": false, 00:07:23.431 "compare_and_write": false, 00:07:23.431 
"abort": true, 00:07:23.431 "seek_hole": false, 00:07:23.431 "seek_data": false, 00:07:23.431 "copy": true, 00:07:23.431 "nvme_iov_md": false 00:07:23.431 }, 00:07:23.431 "memory_domains": [ 00:07:23.431 { 00:07:23.431 "dma_device_id": "system", 00:07:23.431 "dma_device_type": 1 00:07:23.431 }, 00:07:23.431 { 00:07:23.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.431 "dma_device_type": 2 00:07:23.431 } 00:07:23.431 ], 00:07:23.431 "driver_specific": {} 00:07:23.431 } 00:07:23.431 ] 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.431 "name": "Existed_Raid", 00:07:23.431 "uuid": "db66fb35-4795-4029-9840-f06dfe12a0f8", 00:07:23.431 "strip_size_kb": 64, 00:07:23.431 "state": "configuring", 00:07:23.431 "raid_level": "raid0", 00:07:23.431 "superblock": true, 00:07:23.431 "num_base_bdevs": 2, 00:07:23.431 "num_base_bdevs_discovered": 1, 00:07:23.431 "num_base_bdevs_operational": 2, 00:07:23.431 "base_bdevs_list": [ 00:07:23.431 { 00:07:23.431 "name": "BaseBdev1", 00:07:23.431 "uuid": "e56f5292-cf0d-43f1-b371-16d8d7be1fc8", 00:07:23.431 "is_configured": true, 00:07:23.431 "data_offset": 2048, 00:07:23.431 "data_size": 63488 00:07:23.431 }, 00:07:23.431 { 00:07:23.431 "name": "BaseBdev2", 00:07:23.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.431 "is_configured": false, 00:07:23.431 "data_offset": 0, 00:07:23.431 "data_size": 0 00:07:23.431 } 00:07:23.431 ] 00:07:23.431 }' 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.431 12:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.998 [2024-11-06 12:38:12.371461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.998 [2024-11-06 12:38:12.371524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.998 [2024-11-06 12:38:12.379501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.998 [2024-11-06 12:38:12.381992] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.998 [2024-11-06 12:38:12.382040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.998 "name": "Existed_Raid", 00:07:23.998 "uuid": "f18a8fca-9e6c-4aa0-ab3b-a0bc437d6ad4", 00:07:23.998 "strip_size_kb": 64, 00:07:23.998 "state": "configuring", 00:07:23.998 "raid_level": "raid0", 00:07:23.998 "superblock": true, 00:07:23.998 "num_base_bdevs": 2, 00:07:23.998 "num_base_bdevs_discovered": 1, 00:07:23.998 "num_base_bdevs_operational": 2, 00:07:23.998 "base_bdevs_list": [ 00:07:23.998 { 00:07:23.998 "name": "BaseBdev1", 00:07:23.998 "uuid": "e56f5292-cf0d-43f1-b371-16d8d7be1fc8", 00:07:23.998 "is_configured": true, 00:07:23.998 "data_offset": 2048, 
00:07:23.998 "data_size": 63488 00:07:23.998 }, 00:07:23.998 { 00:07:23.998 "name": "BaseBdev2", 00:07:23.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.998 "is_configured": false, 00:07:23.998 "data_offset": 0, 00:07:23.998 "data_size": 0 00:07:23.998 } 00:07:23.998 ] 00:07:23.998 }' 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.998 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.257 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.257 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.257 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.516 [2024-11-06 12:38:12.933548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.516 [2024-11-06 12:38:12.933869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:24.516 [2024-11-06 12:38:12.933895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.516 BaseBdev2 00:07:24.516 [2024-11-06 12:38:12.934321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.516 [2024-11-06 12:38:12.934508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:24.516 [2024-11-06 12:38:12.934536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:24.516 [2024-11-06 12:38:12.934704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.516 [ 00:07:24.516 { 00:07:24.516 "name": "BaseBdev2", 00:07:24.516 "aliases": [ 00:07:24.516 "515a7d2b-304c-4e50-a234-b8af7c40a88b" 00:07:24.516 ], 00:07:24.516 "product_name": "Malloc disk", 00:07:24.516 "block_size": 512, 00:07:24.516 "num_blocks": 65536, 00:07:24.516 "uuid": "515a7d2b-304c-4e50-a234-b8af7c40a88b", 00:07:24.516 "assigned_rate_limits": { 00:07:24.516 "rw_ios_per_sec": 0, 00:07:24.516 "rw_mbytes_per_sec": 0, 00:07:24.516 "r_mbytes_per_sec": 0, 00:07:24.516 "w_mbytes_per_sec": 0 00:07:24.516 }, 00:07:24.516 "claimed": true, 00:07:24.516 "claim_type": 
"exclusive_write", 00:07:24.516 "zoned": false, 00:07:24.516 "supported_io_types": { 00:07:24.516 "read": true, 00:07:24.516 "write": true, 00:07:24.516 "unmap": true, 00:07:24.516 "flush": true, 00:07:24.516 "reset": true, 00:07:24.516 "nvme_admin": false, 00:07:24.516 "nvme_io": false, 00:07:24.516 "nvme_io_md": false, 00:07:24.516 "write_zeroes": true, 00:07:24.516 "zcopy": true, 00:07:24.516 "get_zone_info": false, 00:07:24.516 "zone_management": false, 00:07:24.516 "zone_append": false, 00:07:24.516 "compare": false, 00:07:24.516 "compare_and_write": false, 00:07:24.516 "abort": true, 00:07:24.516 "seek_hole": false, 00:07:24.516 "seek_data": false, 00:07:24.516 "copy": true, 00:07:24.516 "nvme_iov_md": false 00:07:24.516 }, 00:07:24.516 "memory_domains": [ 00:07:24.516 { 00:07:24.516 "dma_device_id": "system", 00:07:24.516 "dma_device_type": 1 00:07:24.516 }, 00:07:24.516 { 00:07:24.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.516 "dma_device_type": 2 00:07:24.516 } 00:07:24.516 ], 00:07:24.516 "driver_specific": {} 00:07:24.516 } 00:07:24.516 ] 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.516 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 12:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.517 12:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.517 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.517 "name": "Existed_Raid", 00:07:24.517 "uuid": "f18a8fca-9e6c-4aa0-ab3b-a0bc437d6ad4", 00:07:24.517 "strip_size_kb": 64, 00:07:24.517 "state": "online", 00:07:24.517 "raid_level": "raid0", 00:07:24.517 "superblock": true, 00:07:24.517 "num_base_bdevs": 2, 00:07:24.517 "num_base_bdevs_discovered": 2, 00:07:24.517 "num_base_bdevs_operational": 2, 00:07:24.517 "base_bdevs_list": [ 00:07:24.517 { 00:07:24.517 "name": "BaseBdev1", 00:07:24.517 "uuid": "e56f5292-cf0d-43f1-b371-16d8d7be1fc8", 00:07:24.517 "is_configured": true, 00:07:24.517 "data_offset": 2048, 00:07:24.517 "data_size": 63488 
00:07:24.517 }, 00:07:24.517 { 00:07:24.517 "name": "BaseBdev2", 00:07:24.517 "uuid": "515a7d2b-304c-4e50-a234-b8af7c40a88b", 00:07:24.517 "is_configured": true, 00:07:24.517 "data_offset": 2048, 00:07:24.517 "data_size": 63488 00:07:24.517 } 00:07:24.517 ] 00:07:24.517 }' 00:07:24.517 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.517 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.084 [2024-11-06 12:38:13.474402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.084 "name": 
"Existed_Raid", 00:07:25.084 "aliases": [ 00:07:25.084 "f18a8fca-9e6c-4aa0-ab3b-a0bc437d6ad4" 00:07:25.084 ], 00:07:25.084 "product_name": "Raid Volume", 00:07:25.084 "block_size": 512, 00:07:25.084 "num_blocks": 126976, 00:07:25.084 "uuid": "f18a8fca-9e6c-4aa0-ab3b-a0bc437d6ad4", 00:07:25.084 "assigned_rate_limits": { 00:07:25.084 "rw_ios_per_sec": 0, 00:07:25.084 "rw_mbytes_per_sec": 0, 00:07:25.084 "r_mbytes_per_sec": 0, 00:07:25.084 "w_mbytes_per_sec": 0 00:07:25.084 }, 00:07:25.084 "claimed": false, 00:07:25.084 "zoned": false, 00:07:25.084 "supported_io_types": { 00:07:25.084 "read": true, 00:07:25.084 "write": true, 00:07:25.084 "unmap": true, 00:07:25.084 "flush": true, 00:07:25.084 "reset": true, 00:07:25.084 "nvme_admin": false, 00:07:25.084 "nvme_io": false, 00:07:25.084 "nvme_io_md": false, 00:07:25.084 "write_zeroes": true, 00:07:25.084 "zcopy": false, 00:07:25.084 "get_zone_info": false, 00:07:25.084 "zone_management": false, 00:07:25.084 "zone_append": false, 00:07:25.084 "compare": false, 00:07:25.084 "compare_and_write": false, 00:07:25.084 "abort": false, 00:07:25.084 "seek_hole": false, 00:07:25.084 "seek_data": false, 00:07:25.084 "copy": false, 00:07:25.084 "nvme_iov_md": false 00:07:25.084 }, 00:07:25.084 "memory_domains": [ 00:07:25.084 { 00:07:25.084 "dma_device_id": "system", 00:07:25.084 "dma_device_type": 1 00:07:25.084 }, 00:07:25.084 { 00:07:25.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.084 "dma_device_type": 2 00:07:25.084 }, 00:07:25.084 { 00:07:25.084 "dma_device_id": "system", 00:07:25.084 "dma_device_type": 1 00:07:25.084 }, 00:07:25.084 { 00:07:25.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.084 "dma_device_type": 2 00:07:25.084 } 00:07:25.084 ], 00:07:25.084 "driver_specific": { 00:07:25.084 "raid": { 00:07:25.084 "uuid": "f18a8fca-9e6c-4aa0-ab3b-a0bc437d6ad4", 00:07:25.084 "strip_size_kb": 64, 00:07:25.084 "state": "online", 00:07:25.084 "raid_level": "raid0", 00:07:25.084 "superblock": true, 00:07:25.084 
"num_base_bdevs": 2, 00:07:25.084 "num_base_bdevs_discovered": 2, 00:07:25.084 "num_base_bdevs_operational": 2, 00:07:25.084 "base_bdevs_list": [ 00:07:25.084 { 00:07:25.084 "name": "BaseBdev1", 00:07:25.084 "uuid": "e56f5292-cf0d-43f1-b371-16d8d7be1fc8", 00:07:25.084 "is_configured": true, 00:07:25.084 "data_offset": 2048, 00:07:25.084 "data_size": 63488 00:07:25.084 }, 00:07:25.084 { 00:07:25.084 "name": "BaseBdev2", 00:07:25.084 "uuid": "515a7d2b-304c-4e50-a234-b8af7c40a88b", 00:07:25.084 "is_configured": true, 00:07:25.084 "data_offset": 2048, 00:07:25.084 "data_size": 63488 00:07:25.084 } 00:07:25.084 ] 00:07:25.084 } 00:07:25.084 } 00:07:25.084 }' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:25.084 BaseBdev2' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.084 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 [2024-11-06 12:38:13.734211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:25.084 [2024-11-06 12:38:13.734256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.084 [2024-11-06 12:38:13.734322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.343 12:38:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.343 "name": "Existed_Raid", 00:07:25.343 "uuid": "f18a8fca-9e6c-4aa0-ab3b-a0bc437d6ad4", 00:07:25.343 "strip_size_kb": 64, 00:07:25.343 "state": "offline", 00:07:25.343 "raid_level": "raid0", 00:07:25.343 "superblock": true, 00:07:25.343 "num_base_bdevs": 2, 00:07:25.343 "num_base_bdevs_discovered": 1, 00:07:25.343 "num_base_bdevs_operational": 1, 00:07:25.343 "base_bdevs_list": [ 00:07:25.343 { 00:07:25.343 "name": null, 00:07:25.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.343 "is_configured": false, 00:07:25.343 "data_offset": 0, 00:07:25.343 "data_size": 63488 00:07:25.343 }, 00:07:25.343 { 00:07:25.343 "name": "BaseBdev2", 00:07:25.343 "uuid": "515a7d2b-304c-4e50-a234-b8af7c40a88b", 00:07:25.343 "is_configured": true, 00:07:25.343 "data_offset": 2048, 00:07:25.343 "data_size": 63488 00:07:25.343 } 00:07:25.343 ] 00:07:25.343 }' 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.343 12:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.911 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:25.911 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.911 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.911 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.911 12:38:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.911 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.912 [2024-11-06 12:38:14.352163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.912 [2024-11-06 12:38:14.352244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60908 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60908 ']' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60908 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60908 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:25.912 killing process with pid 60908 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60908' 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60908 00:07:25.912 [2024-11-06 12:38:14.519053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.912 12:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60908 00:07:25.912 [2024-11-06 12:38:14.533659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.289 12:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:27.289 00:07:27.289 real 0m5.410s 00:07:27.289 user 0m8.179s 00:07:27.289 sys 0m0.753s 00:07:27.289 12:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.289 ************************************ 00:07:27.289 END TEST raid_state_function_test_sb 00:07:27.289 12:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.289 ************************************ 00:07:27.289 12:38:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:27.289 12:38:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:27.289 12:38:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.289 12:38:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.289 ************************************ 00:07:27.289 START TEST raid_superblock_test 00:07:27.289 ************************************ 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61160 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61160 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61160 ']' 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.289 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:27.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.290 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:27.290 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:27.290 12:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.290 [2024-11-06 12:38:15.714205] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:27.290 [2024-11-06 12:38:15.714392] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61160 ] 00:07:27.290 [2024-11-06 12:38:15.899077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.548 [2024-11-06 12:38:16.028962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.807 [2024-11-06 12:38:16.234773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.807 [2024-11-06 12:38:16.234850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:28.375 12:38:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.375 malloc1 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.375 [2024-11-06 12:38:16.886331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:28.375 [2024-11-06 12:38:16.886405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.375 [2024-11-06 12:38:16.886439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:28.375 [2024-11-06 12:38:16.886455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.375 [2024-11-06 12:38:16.889155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.375 [2024-11-06 12:38:16.889216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:28.375 pt1 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:28.375 12:38:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.375 malloc2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.375 [2024-11-06 12:38:16.938303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:28.375 [2024-11-06 12:38:16.938378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.375 [2024-11-06 12:38:16.938412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:28.375 
[2024-11-06 12:38:16.938427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.375 [2024-11-06 12:38:16.941157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.375 [2024-11-06 12:38:16.941210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:28.375 pt2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.375 [2024-11-06 12:38:16.950406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:28.375 [2024-11-06 12:38:16.952847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:28.375 [2024-11-06 12:38:16.953062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:28.375 [2024-11-06 12:38:16.953089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.375 [2024-11-06 12:38:16.953426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:28.375 [2024-11-06 12:38:16.953632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:28.375 [2024-11-06 12:38:16.953659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:28.375 [2024-11-06 12:38:16.953850] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.375 12:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.375 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.375 "name": "raid_bdev1", 00:07:28.375 "uuid": 
"6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:28.375 "strip_size_kb": 64, 00:07:28.375 "state": "online", 00:07:28.376 "raid_level": "raid0", 00:07:28.376 "superblock": true, 00:07:28.376 "num_base_bdevs": 2, 00:07:28.376 "num_base_bdevs_discovered": 2, 00:07:28.376 "num_base_bdevs_operational": 2, 00:07:28.376 "base_bdevs_list": [ 00:07:28.376 { 00:07:28.376 "name": "pt1", 00:07:28.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.376 "is_configured": true, 00:07:28.376 "data_offset": 2048, 00:07:28.376 "data_size": 63488 00:07:28.376 }, 00:07:28.376 { 00:07:28.376 "name": "pt2", 00:07:28.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.376 "is_configured": true, 00:07:28.376 "data_offset": 2048, 00:07:28.376 "data_size": 63488 00:07:28.376 } 00:07:28.376 ] 00:07:28.376 }' 00:07:28.376 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.376 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.943 12:38:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.943 [2024-11-06 12:38:17.482855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.943 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.943 "name": "raid_bdev1", 00:07:28.943 "aliases": [ 00:07:28.943 "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4" 00:07:28.943 ], 00:07:28.943 "product_name": "Raid Volume", 00:07:28.943 "block_size": 512, 00:07:28.943 "num_blocks": 126976, 00:07:28.943 "uuid": "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:28.943 "assigned_rate_limits": { 00:07:28.943 "rw_ios_per_sec": 0, 00:07:28.943 "rw_mbytes_per_sec": 0, 00:07:28.943 "r_mbytes_per_sec": 0, 00:07:28.943 "w_mbytes_per_sec": 0 00:07:28.943 }, 00:07:28.943 "claimed": false, 00:07:28.943 "zoned": false, 00:07:28.943 "supported_io_types": { 00:07:28.943 "read": true, 00:07:28.943 "write": true, 00:07:28.943 "unmap": true, 00:07:28.943 "flush": true, 00:07:28.943 "reset": true, 00:07:28.943 "nvme_admin": false, 00:07:28.943 "nvme_io": false, 00:07:28.943 "nvme_io_md": false, 00:07:28.943 "write_zeroes": true, 00:07:28.943 "zcopy": false, 00:07:28.943 "get_zone_info": false, 00:07:28.943 "zone_management": false, 00:07:28.943 "zone_append": false, 00:07:28.943 "compare": false, 00:07:28.943 "compare_and_write": false, 00:07:28.943 "abort": false, 00:07:28.943 "seek_hole": false, 00:07:28.943 "seek_data": false, 00:07:28.943 "copy": false, 00:07:28.943 "nvme_iov_md": false 00:07:28.943 }, 00:07:28.943 "memory_domains": [ 00:07:28.943 { 00:07:28.943 "dma_device_id": "system", 00:07:28.943 "dma_device_type": 1 00:07:28.943 }, 00:07:28.943 { 00:07:28.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.943 "dma_device_type": 2 00:07:28.943 }, 00:07:28.943 { 00:07:28.943 "dma_device_id": "system", 00:07:28.944 "dma_device_type": 
1 00:07:28.944 }, 00:07:28.944 { 00:07:28.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.944 "dma_device_type": 2 00:07:28.944 } 00:07:28.944 ], 00:07:28.944 "driver_specific": { 00:07:28.944 "raid": { 00:07:28.944 "uuid": "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:28.944 "strip_size_kb": 64, 00:07:28.944 "state": "online", 00:07:28.944 "raid_level": "raid0", 00:07:28.944 "superblock": true, 00:07:28.944 "num_base_bdevs": 2, 00:07:28.944 "num_base_bdevs_discovered": 2, 00:07:28.944 "num_base_bdevs_operational": 2, 00:07:28.944 "base_bdevs_list": [ 00:07:28.944 { 00:07:28.944 "name": "pt1", 00:07:28.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.944 "is_configured": true, 00:07:28.944 "data_offset": 2048, 00:07:28.944 "data_size": 63488 00:07:28.944 }, 00:07:28.944 { 00:07:28.944 "name": "pt2", 00:07:28.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.944 "is_configured": true, 00:07:28.944 "data_offset": 2048, 00:07:28.944 "data_size": 63488 00:07:28.944 } 00:07:28.944 ] 00:07:28.944 } 00:07:28.944 } 00:07:28.944 }' 00:07:28.944 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.944 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.944 pt2' 00:07:28.944 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 [2024-11-06 12:38:17.742934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.202 12:38:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4 ']' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 [2024-11-06 12:38:17.786565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.202 [2024-11-06 12:38:17.786599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.202 [2024-11-06 12:38:17.786706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.202 [2024-11-06 12:38:17.786770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.202 [2024-11-06 12:38:17.786789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.202 12:38:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:29.202 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.203 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 [2024-11-06 12:38:17.918677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:29.462 [2024-11-06 12:38:17.921115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:29.462 [2024-11-06 12:38:17.921225] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:29.462 [2024-11-06 12:38:17.921296] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:29.462 [2024-11-06 12:38:17.921334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.462 [2024-11-06 12:38:17.921352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:29.462 request: 00:07:29.462 { 00:07:29.462 "name": "raid_bdev1", 00:07:29.462 "raid_level": "raid0", 00:07:29.462 "base_bdevs": [ 00:07:29.462 "malloc1", 00:07:29.462 "malloc2" 00:07:29.462 ], 00:07:29.462 "strip_size_kb": 64, 00:07:29.462 "superblock": false, 00:07:29.462 "method": "bdev_raid_create", 00:07:29.462 "req_id": 1 00:07:29.462 } 00:07:29.462 Got JSON-RPC error response 00:07:29.462 response: 00:07:29.462 { 00:07:29.462 "code": -17, 00:07:29.462 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:29.462 } 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 [2024-11-06 12:38:17.986669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:29.462 [2024-11-06 12:38:17.986751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.462 [2024-11-06 12:38:17.986781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:29.462 [2024-11-06 12:38:17.986799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.462 [2024-11-06 12:38:17.989668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.462 [2024-11-06 12:38:17.989730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:29.462 [2024-11-06 12:38:17.989838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:29.462 [2024-11-06 12:38:17.989918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:29.462 pt1 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.462 12:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.462 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.462 "name": "raid_bdev1", 00:07:29.462 "uuid": "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:29.462 "strip_size_kb": 64, 00:07:29.462 "state": "configuring", 00:07:29.462 "raid_level": "raid0", 00:07:29.462 "superblock": true, 00:07:29.462 "num_base_bdevs": 2, 00:07:29.462 "num_base_bdevs_discovered": 1, 00:07:29.462 "num_base_bdevs_operational": 2, 00:07:29.462 "base_bdevs_list": [ 00:07:29.462 { 00:07:29.462 "name": "pt1", 00:07:29.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:29.462 "is_configured": true, 00:07:29.462 "data_offset": 2048, 00:07:29.462 "data_size": 63488 00:07:29.462 }, 00:07:29.462 { 00:07:29.462 "name": null, 00:07:29.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.462 "is_configured": false, 00:07:29.462 "data_offset": 2048, 00:07:29.462 "data_size": 63488 00:07:29.462 } 00:07:29.462 ] 00:07:29.462 }' 00:07:29.462 12:38:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.462 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.075 [2024-11-06 12:38:18.494782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.075 [2024-11-06 12:38:18.494866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.075 [2024-11-06 12:38:18.494898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:30.075 [2024-11-06 12:38:18.494916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.075 [2024-11-06 12:38:18.495508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.075 [2024-11-06 12:38:18.495546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.075 [2024-11-06 12:38:18.495647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:30.075 [2024-11-06 12:38:18.495690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.075 [2024-11-06 12:38:18.495830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:30.075 [2024-11-06 12:38:18.495850] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.075 [2024-11-06 12:38:18.496137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:30.075 [2024-11-06 12:38:18.496351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:30.075 [2024-11-06 12:38:18.496378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:30.075 [2024-11-06 12:38:18.496554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.075 pt2 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.075 "name": "raid_bdev1", 00:07:30.075 "uuid": "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:30.075 "strip_size_kb": 64, 00:07:30.075 "state": "online", 00:07:30.075 "raid_level": "raid0", 00:07:30.075 "superblock": true, 00:07:30.075 "num_base_bdevs": 2, 00:07:30.075 "num_base_bdevs_discovered": 2, 00:07:30.075 "num_base_bdevs_operational": 2, 00:07:30.075 "base_bdevs_list": [ 00:07:30.075 { 00:07:30.075 "name": "pt1", 00:07:30.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.075 "is_configured": true, 00:07:30.075 "data_offset": 2048, 00:07:30.075 "data_size": 63488 00:07:30.075 }, 00:07:30.075 { 00:07:30.075 "name": "pt2", 00:07:30.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.075 "is_configured": true, 00:07:30.075 "data_offset": 2048, 00:07:30.075 "data_size": 63488 00:07:30.075 } 00:07:30.075 ] 00:07:30.075 }' 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.075 12:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:30.642 
12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.642 [2024-11-06 12:38:19.027238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.642 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.642 "name": "raid_bdev1", 00:07:30.642 "aliases": [ 00:07:30.642 "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4" 00:07:30.642 ], 00:07:30.642 "product_name": "Raid Volume", 00:07:30.642 "block_size": 512, 00:07:30.642 "num_blocks": 126976, 00:07:30.642 "uuid": "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:30.642 "assigned_rate_limits": { 00:07:30.642 "rw_ios_per_sec": 0, 00:07:30.642 "rw_mbytes_per_sec": 0, 00:07:30.642 "r_mbytes_per_sec": 0, 00:07:30.643 "w_mbytes_per_sec": 0 00:07:30.643 }, 00:07:30.643 "claimed": false, 00:07:30.643 "zoned": false, 00:07:30.643 "supported_io_types": { 00:07:30.643 "read": true, 00:07:30.643 "write": true, 00:07:30.643 "unmap": true, 00:07:30.643 "flush": true, 00:07:30.643 "reset": true, 00:07:30.643 "nvme_admin": false, 00:07:30.643 "nvme_io": false, 00:07:30.643 "nvme_io_md": false, 00:07:30.643 
"write_zeroes": true, 00:07:30.643 "zcopy": false, 00:07:30.643 "get_zone_info": false, 00:07:30.643 "zone_management": false, 00:07:30.643 "zone_append": false, 00:07:30.643 "compare": false, 00:07:30.643 "compare_and_write": false, 00:07:30.643 "abort": false, 00:07:30.643 "seek_hole": false, 00:07:30.643 "seek_data": false, 00:07:30.643 "copy": false, 00:07:30.643 "nvme_iov_md": false 00:07:30.643 }, 00:07:30.643 "memory_domains": [ 00:07:30.643 { 00:07:30.643 "dma_device_id": "system", 00:07:30.643 "dma_device_type": 1 00:07:30.643 }, 00:07:30.643 { 00:07:30.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.643 "dma_device_type": 2 00:07:30.643 }, 00:07:30.643 { 00:07:30.643 "dma_device_id": "system", 00:07:30.643 "dma_device_type": 1 00:07:30.643 }, 00:07:30.643 { 00:07:30.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.643 "dma_device_type": 2 00:07:30.643 } 00:07:30.643 ], 00:07:30.643 "driver_specific": { 00:07:30.643 "raid": { 00:07:30.643 "uuid": "6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4", 00:07:30.643 "strip_size_kb": 64, 00:07:30.643 "state": "online", 00:07:30.643 "raid_level": "raid0", 00:07:30.643 "superblock": true, 00:07:30.643 "num_base_bdevs": 2, 00:07:30.643 "num_base_bdevs_discovered": 2, 00:07:30.643 "num_base_bdevs_operational": 2, 00:07:30.643 "base_bdevs_list": [ 00:07:30.643 { 00:07:30.643 "name": "pt1", 00:07:30.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.643 "is_configured": true, 00:07:30.643 "data_offset": 2048, 00:07:30.643 "data_size": 63488 00:07:30.643 }, 00:07:30.643 { 00:07:30.643 "name": "pt2", 00:07:30.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.643 "is_configured": true, 00:07:30.643 "data_offset": 2048, 00:07:30.643 "data_size": 63488 00:07:30.643 } 00:07:30.643 ] 00:07:30.643 } 00:07:30.643 } 00:07:30.643 }' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:30.643 pt2' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.643 12:38:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.643 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.643 [2024-11-06 12:38:19.295300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4 '!=' 6bebfe4b-6ff1-44d5-9bbd-b1b962bd0cf4 ']' 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61160 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61160 ']' 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61160 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61160 00:07:30.902 killing process with pid 61160 
00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61160' 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61160 00:07:30.902 [2024-11-06 12:38:19.374939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.902 12:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61160 00:07:30.902 [2024-11-06 12:38:19.375053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.902 [2024-11-06 12:38:19.375117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.902 [2024-11-06 12:38:19.375136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:31.159 [2024-11-06 12:38:19.560415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.095 ************************************ 00:07:32.095 END TEST raid_superblock_test 00:07:32.095 ************************************ 00:07:32.095 12:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:32.095 00:07:32.095 real 0m4.970s 00:07:32.095 user 0m7.410s 00:07:32.095 sys 0m0.694s 00:07:32.095 12:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.095 12:38:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.095 12:38:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:32.095 12:38:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:32.095 12:38:20 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.095 12:38:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.095 ************************************ 00:07:32.095 START TEST raid_read_error_test 00:07:32.095 ************************************ 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.095 12:38:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Y8yHhqqc9u 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61377 00:07:32.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61377 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61377 ']' 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.095 12:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.354 [2024-11-06 12:38:20.770833] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:32.354 [2024-11-06 12:38:20.771031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61377 ] 00:07:32.354 [2024-11-06 12:38:20.958383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.612 [2024-11-06 12:38:21.096350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.870 [2024-11-06 12:38:21.301508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.870 [2024-11-06 12:38:21.301583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.129 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.129 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:33.129 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.129 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:33.129 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.129 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 BaseBdev1_malloc 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 true 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 [2024-11-06 12:38:21.825358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:33.414 [2024-11-06 12:38:21.825421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.414 [2024-11-06 12:38:21.825464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:33.414 [2024-11-06 12:38:21.825480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.414 [2024-11-06 12:38:21.828573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.414 [2024-11-06 12:38:21.828665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:33.414 BaseBdev1 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:33.414 BaseBdev2_malloc 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 true 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 [2024-11-06 12:38:21.891855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:33.414 [2024-11-06 12:38:21.891978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.414 [2024-11-06 12:38:21.892014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:33.414 [2024-11-06 12:38:21.892033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.414 [2024-11-06 12:38:21.894887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.414 [2024-11-06 12:38:21.894966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:33.414 BaseBdev2 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:33.414 12:38:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.414 [2024-11-06 12:38:21.903964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.414 [2024-11-06 12:38:21.906577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.414 [2024-11-06 12:38:21.907025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.414 [2024-11-06 12:38:21.907058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.414 [2024-11-06 12:38:21.907464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:33.414 [2024-11-06 12:38:21.907686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.414 [2024-11-06 12:38:21.907714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:33.414 [2024-11-06 12:38:21.907967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.414 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.415 "name": "raid_bdev1", 00:07:33.415 "uuid": "6511f7b5-e97e-49e7-a8cd-8ab244c8355d", 00:07:33.415 "strip_size_kb": 64, 00:07:33.415 "state": "online", 00:07:33.415 "raid_level": "raid0", 00:07:33.415 "superblock": true, 00:07:33.415 "num_base_bdevs": 2, 00:07:33.415 "num_base_bdevs_discovered": 2, 00:07:33.415 "num_base_bdevs_operational": 2, 00:07:33.415 "base_bdevs_list": [ 00:07:33.415 { 00:07:33.415 "name": "BaseBdev1", 00:07:33.415 "uuid": "d0098c37-68b1-5295-ad87-b66107a33d30", 00:07:33.415 "is_configured": true, 00:07:33.415 "data_offset": 2048, 00:07:33.415 "data_size": 63488 00:07:33.415 }, 00:07:33.415 { 00:07:33.415 "name": "BaseBdev2", 00:07:33.415 "uuid": "1c7b97ab-0cd1-5761-b46d-3cc89993b5f6", 00:07:33.415 "is_configured": true, 00:07:33.415 "data_offset": 2048, 00:07:33.415 "data_size": 63488 00:07:33.415 } 00:07:33.415 ] 00:07:33.415 }' 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.415 12:38:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.982 12:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:33.982 12:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:33.982 [2024-11-06 12:38:22.537453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:34.917 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.918 "name": "raid_bdev1", 00:07:34.918 "uuid": "6511f7b5-e97e-49e7-a8cd-8ab244c8355d", 00:07:34.918 "strip_size_kb": 64, 00:07:34.918 "state": "online", 00:07:34.918 "raid_level": "raid0", 00:07:34.918 "superblock": true, 00:07:34.918 "num_base_bdevs": 2, 00:07:34.918 "num_base_bdevs_discovered": 2, 00:07:34.918 "num_base_bdevs_operational": 2, 00:07:34.918 "base_bdevs_list": [ 00:07:34.918 { 00:07:34.918 "name": "BaseBdev1", 00:07:34.918 "uuid": "d0098c37-68b1-5295-ad87-b66107a33d30", 00:07:34.918 "is_configured": true, 00:07:34.918 "data_offset": 2048, 00:07:34.918 "data_size": 63488 00:07:34.918 }, 00:07:34.918 { 00:07:34.918 "name": "BaseBdev2", 00:07:34.918 "uuid": "1c7b97ab-0cd1-5761-b46d-3cc89993b5f6", 00:07:34.918 "is_configured": true, 00:07:34.918 "data_offset": 2048, 00:07:34.918 "data_size": 63488 00:07:34.918 } 00:07:34.918 ] 00:07:34.918 }' 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.918 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.484 [2024-11-06 12:38:23.981149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.484 [2024-11-06 12:38:23.981189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.484 [2024-11-06 12:38:23.984853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.484 [2024-11-06 12:38:23.984909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.484 [2024-11-06 12:38:23.984959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.484 [2024-11-06 12:38:23.984976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:35.484 { 00:07:35.484 "results": [ 00:07:35.484 { 00:07:35.484 "job": "raid_bdev1", 00:07:35.484 "core_mask": "0x1", 00:07:35.484 "workload": "randrw", 00:07:35.484 "percentage": 50, 00:07:35.484 "status": "finished", 00:07:35.484 "queue_depth": 1, 00:07:35.484 "io_size": 131072, 00:07:35.484 "runtime": 1.441366, 00:07:35.484 "iops": 11013.857687776734, 00:07:35.484 "mibps": 1376.7322109720917, 00:07:35.484 "io_failed": 1, 00:07:35.484 "io_timeout": 0, 00:07:35.484 "avg_latency_us": 126.75731464302892, 00:07:35.484 "min_latency_us": 36.07272727272727, 00:07:35.484 "max_latency_us": 1921.3963636363637 00:07:35.484 } 00:07:35.484 ], 00:07:35.484 "core_count": 1 00:07:35.484 } 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61377 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61377 ']' 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61377 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.484 12:38:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61377 00:07:35.484 killing process with pid 61377 00:07:35.484 12:38:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.484 12:38:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.484 12:38:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61377' 00:07:35.484 12:38:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61377 00:07:35.484 [2024-11-06 12:38:24.019602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.484 12:38:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61377 00:07:35.742 [2024-11-06 12:38:24.146439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Y8yHhqqc9u 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:07:36.700 00:07:36.700 real 0m4.631s 00:07:36.700 user 0m5.782s 00:07:36.700 sys 0m0.593s 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.700 ************************************ 00:07:36.700 END TEST raid_read_error_test 00:07:36.700 ************************************ 00:07:36.700 12:38:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.700 12:38:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:36.700 12:38:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:36.700 12:38:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.700 12:38:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.700 ************************************ 00:07:36.700 START TEST raid_write_error_test 00:07:36.700 ************************************ 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.700 12:38:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GzCekSEIfn 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61527 00:07:36.700 12:38:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61527 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61527 ']' 00:07:36.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.700 12:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.959 [2024-11-06 12:38:25.425353] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:07:36.959 [2024-11-06 12:38:25.425531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61527 ] 00:07:36.959 [2024-11-06 12:38:25.604067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.217 [2024-11-06 12:38:25.734205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.477 [2024-11-06 12:38:25.937795] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.477 [2024-11-06 12:38:25.938049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.735 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.735 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:37.735 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.735 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:37.735 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.735 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 BaseBdev1_malloc 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 true 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 [2024-11-06 12:38:26.438314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:37.995 [2024-11-06 12:38:26.438383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.995 [2024-11-06 12:38:26.438413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:37.995 [2024-11-06 12:38:26.438432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.995 [2024-11-06 12:38:26.441204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.995 [2024-11-06 12:38:26.441250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:37.995 BaseBdev1 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 BaseBdev2_malloc 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:37.995 12:38:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 true 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 [2024-11-06 12:38:26.502559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:37.995 [2024-11-06 12:38:26.502648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.995 [2024-11-06 12:38:26.502680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:37.995 [2024-11-06 12:38:26.502699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.995 [2024-11-06 12:38:26.505639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.995 [2024-11-06 12:38:26.505692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:37.995 BaseBdev2 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 [2024-11-06 12:38:26.510625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:37.995 [2024-11-06 12:38:26.513287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.995 [2024-11-06 12:38:26.513684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.995 [2024-11-06 12:38:26.513824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.995 [2024-11-06 12:38:26.514227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:37.995 [2024-11-06 12:38:26.514571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.995 [2024-11-06 12:38:26.514700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:37.995 [2024-11-06 12:38:26.515178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.995 "name": "raid_bdev1", 00:07:37.995 "uuid": "f2e8071b-28ca-48af-858a-ad2142c98a11", 00:07:37.995 "strip_size_kb": 64, 00:07:37.995 "state": "online", 00:07:37.995 "raid_level": "raid0", 00:07:37.995 "superblock": true, 00:07:37.995 "num_base_bdevs": 2, 00:07:37.995 "num_base_bdevs_discovered": 2, 00:07:37.995 "num_base_bdevs_operational": 2, 00:07:37.995 "base_bdevs_list": [ 00:07:37.995 { 00:07:37.995 "name": "BaseBdev1", 00:07:37.995 "uuid": "0db0e2a7-6b9f-5679-be93-8d17bdaa2135", 00:07:37.995 "is_configured": true, 00:07:37.995 "data_offset": 2048, 00:07:37.995 "data_size": 63488 00:07:37.995 }, 00:07:37.995 { 00:07:37.995 "name": "BaseBdev2", 00:07:37.995 "uuid": "f6f62c32-ed18-53ba-acee-2c04989e7328", 00:07:37.995 "is_configured": true, 00:07:37.995 "data_offset": 2048, 00:07:37.995 "data_size": 63488 00:07:37.995 } 00:07:37.995 ] 00:07:37.995 }' 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.995 12:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 12:38:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:38.563 12:38:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:38.563 [2024-11-06 12:38:27.152658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.513 12:38:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.513 "name": "raid_bdev1", 00:07:39.513 "uuid": "f2e8071b-28ca-48af-858a-ad2142c98a11", 00:07:39.513 "strip_size_kb": 64, 00:07:39.513 "state": "online", 00:07:39.513 "raid_level": "raid0", 00:07:39.513 "superblock": true, 00:07:39.513 "num_base_bdevs": 2, 00:07:39.513 "num_base_bdevs_discovered": 2, 00:07:39.513 "num_base_bdevs_operational": 2, 00:07:39.513 "base_bdevs_list": [ 00:07:39.513 { 00:07:39.513 "name": "BaseBdev1", 00:07:39.513 "uuid": "0db0e2a7-6b9f-5679-be93-8d17bdaa2135", 00:07:39.513 "is_configured": true, 00:07:39.513 "data_offset": 2048, 00:07:39.513 "data_size": 63488 00:07:39.513 }, 00:07:39.513 { 00:07:39.513 "name": "BaseBdev2", 00:07:39.513 "uuid": "f6f62c32-ed18-53ba-acee-2c04989e7328", 00:07:39.513 "is_configured": true, 00:07:39.513 "data_offset": 2048, 00:07:39.513 "data_size": 63488 00:07:39.513 } 00:07:39.513 ] 00:07:39.513 }' 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.513 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.081 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.081 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.081 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.081 [2024-11-06 12:38:28.555263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.081 [2024-11-06 12:38:28.555306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.081 [2024-11-06 12:38:28.558765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.081 [2024-11-06 12:38:28.558968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.081 [2024-11-06 12:38:28.559038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.081 [2024-11-06 12:38:28.559059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:40.081 { 00:07:40.081 "results": [ 00:07:40.081 { 00:07:40.081 "job": "raid_bdev1", 00:07:40.081 "core_mask": "0x1", 00:07:40.081 "workload": "randrw", 00:07:40.081 "percentage": 50, 00:07:40.082 "status": "finished", 00:07:40.082 "queue_depth": 1, 00:07:40.082 "io_size": 131072, 00:07:40.082 "runtime": 1.399978, 00:07:40.082 "iops": 11060.173802731186, 00:07:40.082 "mibps": 1382.5217253413982, 00:07:40.082 "io_failed": 1, 00:07:40.082 "io_timeout": 0, 00:07:40.082 "avg_latency_us": 126.17276402383537, 00:07:40.082 "min_latency_us": 42.123636363636365, 00:07:40.082 "max_latency_us": 1876.7127272727273 00:07:40.082 } 00:07:40.082 ], 00:07:40.082 "core_count": 1 00:07:40.082 } 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61527 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61527 ']' 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61527 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61527 00:07:40.082 killing process with pid 61527 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61527' 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61527 00:07:40.082 [2024-11-06 12:38:28.594967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.082 12:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61527 00:07:40.082 [2024-11-06 12:38:28.717372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GzCekSEIfn 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.457 ************************************ 00:07:41.457 END TEST raid_write_error_test 00:07:41.457 ************************************ 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:41.457 
12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:41.457 00:07:41.457 real 0m4.495s 00:07:41.457 user 0m5.629s 00:07:41.457 sys 0m0.537s 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.457 12:38:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.457 12:38:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:41.457 12:38:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:41.457 12:38:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:41.457 12:38:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.457 12:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.457 ************************************ 00:07:41.457 START TEST raid_state_function_test 00:07:41.457 ************************************ 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:41.457 Process raid pid: 61666 00:07:41.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61666 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61666' 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61666 00:07:41.457 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61666 ']' 00:07:41.458 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.458 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.458 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.458 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.458 12:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.458 [2024-11-06 12:38:29.956152] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:07:41.458 [2024-11-06 12:38:29.956469] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.717 [2024-11-06 12:38:30.125525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.717 [2024-11-06 12:38:30.256375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.975 [2024-11-06 12:38:30.462815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.975 [2024-11-06 12:38:30.463099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.549 [2024-11-06 12:38:30.949754] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.549 [2024-11-06 12:38:30.949818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.549 [2024-11-06 12:38:30.949835] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.549 [2024-11-06 12:38:30.949852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.549 12:38:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.549 12:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.549 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.549 "name": "Existed_Raid", 00:07:42.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.549 "strip_size_kb": 64, 00:07:42.549 "state": "configuring", 00:07:42.549 
"raid_level": "concat", 00:07:42.549 "superblock": false, 00:07:42.549 "num_base_bdevs": 2, 00:07:42.549 "num_base_bdevs_discovered": 0, 00:07:42.549 "num_base_bdevs_operational": 2, 00:07:42.549 "base_bdevs_list": [ 00:07:42.549 { 00:07:42.549 "name": "BaseBdev1", 00:07:42.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.549 "is_configured": false, 00:07:42.549 "data_offset": 0, 00:07:42.549 "data_size": 0 00:07:42.549 }, 00:07:42.549 { 00:07:42.549 "name": "BaseBdev2", 00:07:42.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.549 "is_configured": false, 00:07:42.549 "data_offset": 0, 00:07:42.549 "data_size": 0 00:07:42.549 } 00:07:42.549 ] 00:07:42.549 }' 00:07:42.549 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.549 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.809 [2024-11-06 12:38:31.445837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.809 [2024-11-06 12:38:31.445882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:42.809 [2024-11-06 12:38:31.453807] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.809 [2024-11-06 12:38:31.453855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.809 [2024-11-06 12:38:31.453869] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.809 [2024-11-06 12:38:31.453887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.809 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.068 [2024-11-06 12:38:31.498754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.068 BaseBdev1 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.068 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.068 [ 00:07:43.068 { 00:07:43.068 "name": "BaseBdev1", 00:07:43.068 "aliases": [ 00:07:43.068 "1b519049-161a-4d51-8744-622921473b9d" 00:07:43.068 ], 00:07:43.068 "product_name": "Malloc disk", 00:07:43.068 "block_size": 512, 00:07:43.068 "num_blocks": 65536, 00:07:43.068 "uuid": "1b519049-161a-4d51-8744-622921473b9d", 00:07:43.068 "assigned_rate_limits": { 00:07:43.068 "rw_ios_per_sec": 0, 00:07:43.068 "rw_mbytes_per_sec": 0, 00:07:43.069 "r_mbytes_per_sec": 0, 00:07:43.069 "w_mbytes_per_sec": 0 00:07:43.069 }, 00:07:43.069 "claimed": true, 00:07:43.069 "claim_type": "exclusive_write", 00:07:43.069 "zoned": false, 00:07:43.069 "supported_io_types": { 00:07:43.069 "read": true, 00:07:43.069 "write": true, 00:07:43.069 "unmap": true, 00:07:43.069 "flush": true, 00:07:43.069 "reset": true, 00:07:43.069 "nvme_admin": false, 00:07:43.069 "nvme_io": false, 00:07:43.069 "nvme_io_md": false, 00:07:43.069 "write_zeroes": true, 00:07:43.069 "zcopy": true, 00:07:43.069 "get_zone_info": false, 00:07:43.069 "zone_management": false, 00:07:43.069 "zone_append": false, 00:07:43.069 "compare": false, 00:07:43.069 "compare_and_write": false, 00:07:43.069 "abort": true, 00:07:43.069 "seek_hole": false, 00:07:43.069 "seek_data": false, 00:07:43.069 "copy": true, 00:07:43.069 "nvme_iov_md": 
false 00:07:43.069 }, 00:07:43.069 "memory_domains": [ 00:07:43.069 { 00:07:43.069 "dma_device_id": "system", 00:07:43.069 "dma_device_type": 1 00:07:43.069 }, 00:07:43.069 { 00:07:43.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.069 "dma_device_type": 2 00:07:43.069 } 00:07:43.069 ], 00:07:43.069 "driver_specific": {} 00:07:43.069 } 00:07:43.069 ] 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.069 12:38:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.069 "name": "Existed_Raid", 00:07:43.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.069 "strip_size_kb": 64, 00:07:43.069 "state": "configuring", 00:07:43.069 "raid_level": "concat", 00:07:43.069 "superblock": false, 00:07:43.069 "num_base_bdevs": 2, 00:07:43.069 "num_base_bdevs_discovered": 1, 00:07:43.069 "num_base_bdevs_operational": 2, 00:07:43.069 "base_bdevs_list": [ 00:07:43.069 { 00:07:43.069 "name": "BaseBdev1", 00:07:43.069 "uuid": "1b519049-161a-4d51-8744-622921473b9d", 00:07:43.069 "is_configured": true, 00:07:43.069 "data_offset": 0, 00:07:43.069 "data_size": 65536 00:07:43.069 }, 00:07:43.069 { 00:07:43.069 "name": "BaseBdev2", 00:07:43.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.069 "is_configured": false, 00:07:43.069 "data_offset": 0, 00:07:43.069 "data_size": 0 00:07:43.069 } 00:07:43.069 ] 00:07:43.069 }' 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.069 12:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.636 [2024-11-06 12:38:32.034941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.636 [2024-11-06 12:38:32.035008] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.636 [2024-11-06 12:38:32.042978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.636 [2024-11-06 12:38:32.045381] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.636 [2024-11-06 12:38:32.045439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.636 "name": "Existed_Raid", 00:07:43.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.636 "strip_size_kb": 64, 00:07:43.636 "state": "configuring", 00:07:43.636 "raid_level": "concat", 00:07:43.636 "superblock": false, 00:07:43.636 "num_base_bdevs": 2, 00:07:43.636 "num_base_bdevs_discovered": 1, 00:07:43.636 "num_base_bdevs_operational": 2, 00:07:43.636 "base_bdevs_list": [ 00:07:43.636 { 00:07:43.636 "name": "BaseBdev1", 00:07:43.636 "uuid": "1b519049-161a-4d51-8744-622921473b9d", 00:07:43.636 "is_configured": true, 00:07:43.636 "data_offset": 0, 00:07:43.636 "data_size": 65536 00:07:43.636 }, 00:07:43.636 { 00:07:43.636 "name": "BaseBdev2", 00:07:43.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.636 "is_configured": false, 00:07:43.636 "data_offset": 0, 00:07:43.636 "data_size": 0 
00:07:43.636 } 00:07:43.636 ] 00:07:43.636 }' 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.636 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.895 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.895 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.895 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.154 [2024-11-06 12:38:32.589077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.154 [2024-11-06 12:38:32.589140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.154 [2024-11-06 12:38:32.589154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:44.154 [2024-11-06 12:38:32.589520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.154 [2024-11-06 12:38:32.589731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.154 [2024-11-06 12:38:32.589763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:44.154 [2024-11-06 12:38:32.590066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.154 BaseBdev2 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:44.154 12:38:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.154 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.155 [ 00:07:44.155 { 00:07:44.155 "name": "BaseBdev2", 00:07:44.155 "aliases": [ 00:07:44.155 "931f2d4c-616e-4c45-8932-de71f2d875fc" 00:07:44.155 ], 00:07:44.155 "product_name": "Malloc disk", 00:07:44.155 "block_size": 512, 00:07:44.155 "num_blocks": 65536, 00:07:44.155 "uuid": "931f2d4c-616e-4c45-8932-de71f2d875fc", 00:07:44.155 "assigned_rate_limits": { 00:07:44.155 "rw_ios_per_sec": 0, 00:07:44.155 "rw_mbytes_per_sec": 0, 00:07:44.155 "r_mbytes_per_sec": 0, 00:07:44.155 "w_mbytes_per_sec": 0 00:07:44.155 }, 00:07:44.155 "claimed": true, 00:07:44.155 "claim_type": "exclusive_write", 00:07:44.155 "zoned": false, 00:07:44.155 "supported_io_types": { 00:07:44.155 "read": true, 00:07:44.155 "write": true, 00:07:44.155 "unmap": true, 00:07:44.155 "flush": true, 00:07:44.155 "reset": true, 00:07:44.155 "nvme_admin": false, 00:07:44.155 "nvme_io": false, 00:07:44.155 "nvme_io_md": 
false, 00:07:44.155 "write_zeroes": true, 00:07:44.155 "zcopy": true, 00:07:44.155 "get_zone_info": false, 00:07:44.155 "zone_management": false, 00:07:44.155 "zone_append": false, 00:07:44.155 "compare": false, 00:07:44.155 "compare_and_write": false, 00:07:44.155 "abort": true, 00:07:44.155 "seek_hole": false, 00:07:44.155 "seek_data": false, 00:07:44.155 "copy": true, 00:07:44.155 "nvme_iov_md": false 00:07:44.155 }, 00:07:44.155 "memory_domains": [ 00:07:44.155 { 00:07:44.155 "dma_device_id": "system", 00:07:44.155 "dma_device_type": 1 00:07:44.155 }, 00:07:44.155 { 00:07:44.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.155 "dma_device_type": 2 00:07:44.155 } 00:07:44.155 ], 00:07:44.155 "driver_specific": {} 00:07:44.155 } 00:07:44.155 ] 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.155 "name": "Existed_Raid", 00:07:44.155 "uuid": "8d8fd2c6-4248-414a-be13-48df44da2f00", 00:07:44.155 "strip_size_kb": 64, 00:07:44.155 "state": "online", 00:07:44.155 "raid_level": "concat", 00:07:44.155 "superblock": false, 00:07:44.155 "num_base_bdevs": 2, 00:07:44.155 "num_base_bdevs_discovered": 2, 00:07:44.155 "num_base_bdevs_operational": 2, 00:07:44.155 "base_bdevs_list": [ 00:07:44.155 { 00:07:44.155 "name": "BaseBdev1", 00:07:44.155 "uuid": "1b519049-161a-4d51-8744-622921473b9d", 00:07:44.155 "is_configured": true, 00:07:44.155 "data_offset": 0, 00:07:44.155 "data_size": 65536 00:07:44.155 }, 00:07:44.155 { 00:07:44.155 "name": "BaseBdev2", 00:07:44.155 "uuid": "931f2d4c-616e-4c45-8932-de71f2d875fc", 00:07:44.155 "is_configured": true, 00:07:44.155 "data_offset": 0, 00:07:44.155 "data_size": 65536 00:07:44.155 } 00:07:44.155 ] 00:07:44.155 }' 00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:44.155 12:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.721 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.722 [2024-11-06 12:38:33.129636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.722 "name": "Existed_Raid", 00:07:44.722 "aliases": [ 00:07:44.722 "8d8fd2c6-4248-414a-be13-48df44da2f00" 00:07:44.722 ], 00:07:44.722 "product_name": "Raid Volume", 00:07:44.722 "block_size": 512, 00:07:44.722 "num_blocks": 131072, 00:07:44.722 "uuid": "8d8fd2c6-4248-414a-be13-48df44da2f00", 00:07:44.722 "assigned_rate_limits": { 00:07:44.722 "rw_ios_per_sec": 0, 00:07:44.722 "rw_mbytes_per_sec": 0, 00:07:44.722 "r_mbytes_per_sec": 
0, 00:07:44.722 "w_mbytes_per_sec": 0 00:07:44.722 }, 00:07:44.722 "claimed": false, 00:07:44.722 "zoned": false, 00:07:44.722 "supported_io_types": { 00:07:44.722 "read": true, 00:07:44.722 "write": true, 00:07:44.722 "unmap": true, 00:07:44.722 "flush": true, 00:07:44.722 "reset": true, 00:07:44.722 "nvme_admin": false, 00:07:44.722 "nvme_io": false, 00:07:44.722 "nvme_io_md": false, 00:07:44.722 "write_zeroes": true, 00:07:44.722 "zcopy": false, 00:07:44.722 "get_zone_info": false, 00:07:44.722 "zone_management": false, 00:07:44.722 "zone_append": false, 00:07:44.722 "compare": false, 00:07:44.722 "compare_and_write": false, 00:07:44.722 "abort": false, 00:07:44.722 "seek_hole": false, 00:07:44.722 "seek_data": false, 00:07:44.722 "copy": false, 00:07:44.722 "nvme_iov_md": false 00:07:44.722 }, 00:07:44.722 "memory_domains": [ 00:07:44.722 { 00:07:44.722 "dma_device_id": "system", 00:07:44.722 "dma_device_type": 1 00:07:44.722 }, 00:07:44.722 { 00:07:44.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.722 "dma_device_type": 2 00:07:44.722 }, 00:07:44.722 { 00:07:44.722 "dma_device_id": "system", 00:07:44.722 "dma_device_type": 1 00:07:44.722 }, 00:07:44.722 { 00:07:44.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.722 "dma_device_type": 2 00:07:44.722 } 00:07:44.722 ], 00:07:44.722 "driver_specific": { 00:07:44.722 "raid": { 00:07:44.722 "uuid": "8d8fd2c6-4248-414a-be13-48df44da2f00", 00:07:44.722 "strip_size_kb": 64, 00:07:44.722 "state": "online", 00:07:44.722 "raid_level": "concat", 00:07:44.722 "superblock": false, 00:07:44.722 "num_base_bdevs": 2, 00:07:44.722 "num_base_bdevs_discovered": 2, 00:07:44.722 "num_base_bdevs_operational": 2, 00:07:44.722 "base_bdevs_list": [ 00:07:44.722 { 00:07:44.722 "name": "BaseBdev1", 00:07:44.722 "uuid": "1b519049-161a-4d51-8744-622921473b9d", 00:07:44.722 "is_configured": true, 00:07:44.722 "data_offset": 0, 00:07:44.722 "data_size": 65536 00:07:44.722 }, 00:07:44.722 { 00:07:44.722 "name": "BaseBdev2", 
00:07:44.722 "uuid": "931f2d4c-616e-4c45-8932-de71f2d875fc", 00:07:44.722 "is_configured": true, 00:07:44.722 "data_offset": 0, 00:07:44.722 "data_size": 65536 00:07:44.722 } 00:07:44.722 ] 00:07:44.722 } 00:07:44.722 } 00:07:44.722 }' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:44.722 BaseBdev2' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.722 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.981 [2024-11-06 12:38:33.389431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.981 [2024-11-06 12:38:33.389480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.981 [2024-11-06 12:38:33.389557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.981 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.981 "name": "Existed_Raid", 00:07:44.981 "uuid": "8d8fd2c6-4248-414a-be13-48df44da2f00", 00:07:44.981 "strip_size_kb": 64, 00:07:44.981 
"state": "offline", 00:07:44.981 "raid_level": "concat", 00:07:44.981 "superblock": false, 00:07:44.981 "num_base_bdevs": 2, 00:07:44.981 "num_base_bdevs_discovered": 1, 00:07:44.981 "num_base_bdevs_operational": 1, 00:07:44.981 "base_bdevs_list": [ 00:07:44.981 { 00:07:44.981 "name": null, 00:07:44.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.981 "is_configured": false, 00:07:44.981 "data_offset": 0, 00:07:44.981 "data_size": 65536 00:07:44.981 }, 00:07:44.981 { 00:07:44.982 "name": "BaseBdev2", 00:07:44.982 "uuid": "931f2d4c-616e-4c45-8932-de71f2d875fc", 00:07:44.982 "is_configured": true, 00:07:44.982 "data_offset": 0, 00:07:44.982 "data_size": 65536 00:07:44.982 } 00:07:44.982 ] 00:07:44.982 }' 00:07:44.982 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.982 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:45.613 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.613 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.613 12:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:45.613 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.613 12:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 [2024-11-06 12:38:34.046731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:45.613 [2024-11-06 12:38:34.046799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61666 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61666 ']' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61666 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61666 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:45.613 killing process with pid 61666 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61666' 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61666 00:07:45.613 [2024-11-06 12:38:34.221673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.613 12:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61666 00:07:45.613 [2024-11-06 12:38:34.236255] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:46.988 00:07:46.988 real 0m5.400s 00:07:46.988 user 0m8.212s 00:07:46.988 sys 0m0.730s 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.988 ************************************ 00:07:46.988 END TEST raid_state_function_test 00:07:46.988 ************************************ 00:07:46.988 12:38:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:46.988 12:38:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:07:46.988 12:38:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.988 12:38:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.988 ************************************ 00:07:46.988 START TEST raid_state_function_test_sb 00:07:46.988 ************************************ 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:46.988 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:46.989 Process raid pid: 61925 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61925 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61925' 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61925 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61925 ']' 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.989 12:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.989 [2024-11-06 12:38:35.404506] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:46.989 [2024-11-06 12:38:35.404705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.989 [2024-11-06 12:38:35.583831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.247 [2024-11-06 12:38:35.718668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.505 [2024-11-06 12:38:35.932532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.505 [2024-11-06 12:38:35.932573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.069 [2024-11-06 12:38:36.468035] bdev.c:8424:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:48.069 [2024-11-06 12:38:36.468285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.069 [2024-11-06 12:38:36.468314] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.069 [2024-11-06 12:38:36.468342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.069 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.070 "name": "Existed_Raid", 00:07:48.070 "uuid": "27dacb8b-1752-48b8-8f40-8faadecfe547", 00:07:48.070 "strip_size_kb": 64, 00:07:48.070 "state": "configuring", 00:07:48.070 "raid_level": "concat", 00:07:48.070 "superblock": true, 00:07:48.070 "num_base_bdevs": 2, 00:07:48.070 "num_base_bdevs_discovered": 0, 00:07:48.070 "num_base_bdevs_operational": 2, 00:07:48.070 "base_bdevs_list": [ 00:07:48.070 { 00:07:48.070 "name": "BaseBdev1", 00:07:48.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.070 "is_configured": false, 00:07:48.070 "data_offset": 0, 00:07:48.070 "data_size": 0 00:07:48.070 }, 00:07:48.070 { 00:07:48.070 "name": "BaseBdev2", 00:07:48.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.070 "is_configured": false, 00:07:48.070 "data_offset": 0, 00:07:48.070 "data_size": 0 00:07:48.070 } 00:07:48.070 ] 00:07:48.070 }' 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.070 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 [2024-11-06 12:38:36.960123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:48.328 [2024-11-06 12:38:36.960170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 [2024-11-06 12:38:36.968127] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.328 [2024-11-06 12:38:36.968180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.328 [2024-11-06 12:38:36.968208] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.328 [2024-11-06 12:38:36.968230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.328 12:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.586 [2024-11-06 12:38:37.013717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.587 BaseBdev1 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.587 [ 00:07:48.587 { 00:07:48.587 "name": "BaseBdev1", 00:07:48.587 "aliases": [ 00:07:48.587 "d84715c4-25f6-42ff-bf38-566d3524c0d6" 00:07:48.587 ], 00:07:48.587 "product_name": "Malloc disk", 00:07:48.587 "block_size": 512, 00:07:48.587 "num_blocks": 65536, 00:07:48.587 "uuid": "d84715c4-25f6-42ff-bf38-566d3524c0d6", 00:07:48.587 "assigned_rate_limits": { 00:07:48.587 "rw_ios_per_sec": 0, 00:07:48.587 "rw_mbytes_per_sec": 0, 00:07:48.587 "r_mbytes_per_sec": 0, 00:07:48.587 "w_mbytes_per_sec": 0 00:07:48.587 }, 00:07:48.587 "claimed": true, 
00:07:48.587 "claim_type": "exclusive_write", 00:07:48.587 "zoned": false, 00:07:48.587 "supported_io_types": { 00:07:48.587 "read": true, 00:07:48.587 "write": true, 00:07:48.587 "unmap": true, 00:07:48.587 "flush": true, 00:07:48.587 "reset": true, 00:07:48.587 "nvme_admin": false, 00:07:48.587 "nvme_io": false, 00:07:48.587 "nvme_io_md": false, 00:07:48.587 "write_zeroes": true, 00:07:48.587 "zcopy": true, 00:07:48.587 "get_zone_info": false, 00:07:48.587 "zone_management": false, 00:07:48.587 "zone_append": false, 00:07:48.587 "compare": false, 00:07:48.587 "compare_and_write": false, 00:07:48.587 "abort": true, 00:07:48.587 "seek_hole": false, 00:07:48.587 "seek_data": false, 00:07:48.587 "copy": true, 00:07:48.587 "nvme_iov_md": false 00:07:48.587 }, 00:07:48.587 "memory_domains": [ 00:07:48.587 { 00:07:48.587 "dma_device_id": "system", 00:07:48.587 "dma_device_type": 1 00:07:48.587 }, 00:07:48.587 { 00:07:48.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.587 "dma_device_type": 2 00:07:48.587 } 00:07:48.587 ], 00:07:48.587 "driver_specific": {} 00:07:48.587 } 00:07:48.587 ] 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.587 12:38:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.587 "name": "Existed_Raid", 00:07:48.587 "uuid": "31e0871b-15d3-47d0-a599-1dde7c600b65", 00:07:48.587 "strip_size_kb": 64, 00:07:48.587 "state": "configuring", 00:07:48.587 "raid_level": "concat", 00:07:48.587 "superblock": true, 00:07:48.587 "num_base_bdevs": 2, 00:07:48.587 "num_base_bdevs_discovered": 1, 00:07:48.587 "num_base_bdevs_operational": 2, 00:07:48.587 "base_bdevs_list": [ 00:07:48.587 { 00:07:48.587 "name": "BaseBdev1", 00:07:48.587 "uuid": "d84715c4-25f6-42ff-bf38-566d3524c0d6", 00:07:48.587 "is_configured": true, 00:07:48.587 "data_offset": 2048, 00:07:48.587 "data_size": 63488 00:07:48.587 }, 00:07:48.587 { 00:07:48.587 "name": "BaseBdev2", 00:07:48.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.587 
"is_configured": false, 00:07:48.587 "data_offset": 0, 00:07:48.587 "data_size": 0 00:07:48.587 } 00:07:48.587 ] 00:07:48.587 }' 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.587 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 [2024-11-06 12:38:37.557991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.154 [2024-11-06 12:38:37.558086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 [2024-11-06 12:38:37.566004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.154 [2024-11-06 12:38:37.568563] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.154 [2024-11-06 12:38:37.568675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.154 12:38:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.154 12:38:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.154 "name": "Existed_Raid", 00:07:49.154 "uuid": "8ee0dfa2-311b-4b32-b3d5-a603ca167d75", 00:07:49.154 "strip_size_kb": 64, 00:07:49.154 "state": "configuring", 00:07:49.154 "raid_level": "concat", 00:07:49.154 "superblock": true, 00:07:49.154 "num_base_bdevs": 2, 00:07:49.154 "num_base_bdevs_discovered": 1, 00:07:49.154 "num_base_bdevs_operational": 2, 00:07:49.154 "base_bdevs_list": [ 00:07:49.154 { 00:07:49.154 "name": "BaseBdev1", 00:07:49.154 "uuid": "d84715c4-25f6-42ff-bf38-566d3524c0d6", 00:07:49.154 "is_configured": true, 00:07:49.154 "data_offset": 2048, 00:07:49.154 "data_size": 63488 00:07:49.154 }, 00:07:49.154 { 00:07:49.154 "name": "BaseBdev2", 00:07:49.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.154 "is_configured": false, 00:07:49.154 "data_offset": 0, 00:07:49.154 "data_size": 0 00:07:49.154 } 00:07:49.154 ] 00:07:49.154 }' 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.154 12:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.721 [2024-11-06 12:38:38.128952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.721 [2024-11-06 12:38:38.129260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.721 [2024-11-06 12:38:38.129280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:49.721 [2024-11-06 12:38:38.129668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:49.721 [2024-11-06 12:38:38.129880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.721 [2024-11-06 12:38:38.129903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:49.721 BaseBdev2 00:07:49.721 [2024-11-06 12:38:38.130074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.721 12:38:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.721 [ 00:07:49.721 { 00:07:49.721 "name": "BaseBdev2", 00:07:49.721 "aliases": [ 00:07:49.721 "1c7c6a40-0090-418a-bd53-6f241b36e84c" 00:07:49.721 ], 00:07:49.721 "product_name": "Malloc disk", 00:07:49.721 "block_size": 512, 00:07:49.721 "num_blocks": 65536, 00:07:49.721 "uuid": "1c7c6a40-0090-418a-bd53-6f241b36e84c", 00:07:49.721 "assigned_rate_limits": { 00:07:49.721 "rw_ios_per_sec": 0, 00:07:49.721 "rw_mbytes_per_sec": 0, 00:07:49.721 "r_mbytes_per_sec": 0, 00:07:49.721 "w_mbytes_per_sec": 0 00:07:49.721 }, 00:07:49.721 "claimed": true, 00:07:49.721 "claim_type": "exclusive_write", 00:07:49.721 "zoned": false, 00:07:49.721 "supported_io_types": { 00:07:49.721 "read": true, 00:07:49.721 "write": true, 00:07:49.721 "unmap": true, 00:07:49.721 "flush": true, 00:07:49.721 "reset": true, 00:07:49.721 "nvme_admin": false, 00:07:49.721 "nvme_io": false, 00:07:49.721 "nvme_io_md": false, 00:07:49.721 "write_zeroes": true, 00:07:49.721 "zcopy": true, 00:07:49.721 "get_zone_info": false, 00:07:49.721 "zone_management": false, 00:07:49.721 "zone_append": false, 00:07:49.721 "compare": false, 00:07:49.721 "compare_and_write": false, 00:07:49.721 "abort": true, 00:07:49.721 "seek_hole": false, 00:07:49.721 "seek_data": false, 00:07:49.721 "copy": true, 00:07:49.721 "nvme_iov_md": false 00:07:49.721 }, 00:07:49.721 "memory_domains": [ 00:07:49.721 { 00:07:49.721 "dma_device_id": "system", 00:07:49.721 "dma_device_type": 1 00:07:49.721 }, 00:07:49.721 { 00:07:49.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.721 "dma_device_type": 2 00:07:49.721 } 00:07:49.721 ], 00:07:49.721 "driver_specific": {} 00:07:49.721 } 00:07:49.721 ] 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:49.721 12:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.721 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.722 12:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.722 "name": "Existed_Raid", 00:07:49.722 "uuid": "8ee0dfa2-311b-4b32-b3d5-a603ca167d75", 00:07:49.722 "strip_size_kb": 64, 00:07:49.722 "state": "online", 00:07:49.722 "raid_level": "concat", 00:07:49.722 "superblock": true, 00:07:49.722 "num_base_bdevs": 2, 00:07:49.722 "num_base_bdevs_discovered": 2, 00:07:49.722 "num_base_bdevs_operational": 2, 00:07:49.722 "base_bdevs_list": [ 00:07:49.722 { 00:07:49.722 "name": "BaseBdev1", 00:07:49.722 "uuid": "d84715c4-25f6-42ff-bf38-566d3524c0d6", 00:07:49.722 "is_configured": true, 00:07:49.722 "data_offset": 2048, 00:07:49.722 "data_size": 63488 00:07:49.722 }, 00:07:49.722 { 00:07:49.722 "name": "BaseBdev2", 00:07:49.722 "uuid": "1c7c6a40-0090-418a-bd53-6f241b36e84c", 00:07:49.722 "is_configured": true, 00:07:49.722 "data_offset": 2048, 00:07:49.722 "data_size": 63488 00:07:49.722 } 00:07:49.722 ] 00:07:49.722 }' 00:07:49.722 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.722 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.289 [2024-11-06 12:38:38.689579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.289 "name": "Existed_Raid", 00:07:50.289 "aliases": [ 00:07:50.289 "8ee0dfa2-311b-4b32-b3d5-a603ca167d75" 00:07:50.289 ], 00:07:50.289 "product_name": "Raid Volume", 00:07:50.289 "block_size": 512, 00:07:50.289 "num_blocks": 126976, 00:07:50.289 "uuid": "8ee0dfa2-311b-4b32-b3d5-a603ca167d75", 00:07:50.289 "assigned_rate_limits": { 00:07:50.289 "rw_ios_per_sec": 0, 00:07:50.289 "rw_mbytes_per_sec": 0, 00:07:50.289 "r_mbytes_per_sec": 0, 00:07:50.289 "w_mbytes_per_sec": 0 00:07:50.289 }, 00:07:50.289 "claimed": false, 00:07:50.289 "zoned": false, 00:07:50.289 "supported_io_types": { 00:07:50.289 "read": true, 00:07:50.289 "write": true, 00:07:50.289 "unmap": true, 00:07:50.289 "flush": true, 00:07:50.289 "reset": true, 00:07:50.289 "nvme_admin": false, 00:07:50.289 "nvme_io": false, 00:07:50.289 "nvme_io_md": false, 00:07:50.289 "write_zeroes": true, 00:07:50.289 "zcopy": false, 00:07:50.289 "get_zone_info": false, 00:07:50.289 "zone_management": false, 00:07:50.289 "zone_append": false, 00:07:50.289 "compare": false, 00:07:50.289 "compare_and_write": false, 00:07:50.289 "abort": false, 00:07:50.289 "seek_hole": false, 00:07:50.289 "seek_data": false, 00:07:50.289 "copy": false, 00:07:50.289 "nvme_iov_md": false 00:07:50.289 }, 00:07:50.289 "memory_domains": [ 00:07:50.289 { 00:07:50.289 
"dma_device_id": "system", 00:07:50.289 "dma_device_type": 1 00:07:50.289 }, 00:07:50.289 { 00:07:50.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.289 "dma_device_type": 2 00:07:50.289 }, 00:07:50.289 { 00:07:50.289 "dma_device_id": "system", 00:07:50.289 "dma_device_type": 1 00:07:50.289 }, 00:07:50.289 { 00:07:50.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.289 "dma_device_type": 2 00:07:50.289 } 00:07:50.289 ], 00:07:50.289 "driver_specific": { 00:07:50.289 "raid": { 00:07:50.289 "uuid": "8ee0dfa2-311b-4b32-b3d5-a603ca167d75", 00:07:50.289 "strip_size_kb": 64, 00:07:50.289 "state": "online", 00:07:50.289 "raid_level": "concat", 00:07:50.289 "superblock": true, 00:07:50.289 "num_base_bdevs": 2, 00:07:50.289 "num_base_bdevs_discovered": 2, 00:07:50.289 "num_base_bdevs_operational": 2, 00:07:50.289 "base_bdevs_list": [ 00:07:50.289 { 00:07:50.289 "name": "BaseBdev1", 00:07:50.289 "uuid": "d84715c4-25f6-42ff-bf38-566d3524c0d6", 00:07:50.289 "is_configured": true, 00:07:50.289 "data_offset": 2048, 00:07:50.289 "data_size": 63488 00:07:50.289 }, 00:07:50.289 { 00:07:50.289 "name": "BaseBdev2", 00:07:50.289 "uuid": "1c7c6a40-0090-418a-bd53-6f241b36e84c", 00:07:50.289 "is_configured": true, 00:07:50.289 "data_offset": 2048, 00:07:50.289 "data_size": 63488 00:07:50.289 } 00:07:50.289 ] 00:07:50.289 } 00:07:50.289 } 00:07:50.289 }' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.289 BaseBdev2' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.289 12:38:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.289 12:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.548 [2024-11-06 12:38:38.945350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.548 [2024-11-06 12:38:38.945397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.548 [2024-11-06 12:38:38.945464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.548 "name": "Existed_Raid", 00:07:50.548 "uuid": "8ee0dfa2-311b-4b32-b3d5-a603ca167d75", 00:07:50.548 "strip_size_kb": 64, 00:07:50.548 "state": "offline", 00:07:50.548 "raid_level": "concat", 00:07:50.548 "superblock": true, 00:07:50.548 "num_base_bdevs": 2, 00:07:50.548 "num_base_bdevs_discovered": 1, 00:07:50.548 "num_base_bdevs_operational": 1, 00:07:50.548 "base_bdevs_list": [ 00:07:50.548 { 00:07:50.548 "name": null, 00:07:50.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.548 "is_configured": false, 00:07:50.548 "data_offset": 0, 00:07:50.548 "data_size": 63488 00:07:50.548 }, 00:07:50.548 { 00:07:50.548 "name": "BaseBdev2", 00:07:50.548 "uuid": "1c7c6a40-0090-418a-bd53-6f241b36e84c", 00:07:50.548 "is_configured": true, 00:07:50.548 "data_offset": 2048, 00:07:50.548 "data_size": 63488 00:07:50.548 } 00:07:50.548 ] 
00:07:50.548 }' 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.548 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.164 [2024-11-06 12:38:39.596887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.164 [2024-11-06 12:38:39.596974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.164 12:38:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61925 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61925 ']' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61925 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61925 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:07:51.164 killing process with pid 61925 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61925' 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61925 00:07:51.164 [2024-11-06 12:38:39.770782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.164 12:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61925 00:07:51.164 [2024-11-06 12:38:39.785878] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.538 12:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.538 00:07:52.538 real 0m5.524s 00:07:52.538 user 0m8.369s 00:07:52.538 sys 0m0.784s 00:07:52.538 12:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.538 12:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.538 ************************************ 00:07:52.539 END TEST raid_state_function_test_sb 00:07:52.539 ************************************ 00:07:52.539 12:38:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:52.539 12:38:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:52.539 12:38:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.539 12:38:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.539 ************************************ 00:07:52.539 START TEST raid_superblock_test 00:07:52.539 ************************************ 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62177 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62177 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62177 ']' 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:52.539 12:38:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.539 12:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.539 [2024-11-06 12:38:40.978243] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:52.539 [2024-11-06 12:38:40.978400] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62177 ] 00:07:52.539 [2024-11-06 12:38:41.149275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.797 [2024-11-06 12:38:41.278963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.055 [2024-11-06 12:38:41.482273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.055 [2024-11-06 12:38:41.482347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.623 
12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.623 malloc1 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.623 [2024-11-06 12:38:42.071027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.623 [2024-11-06 12:38:42.071104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.623 [2024-11-06 12:38:42.071141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.623 [2024-11-06 12:38:42.071158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:53.623 [2024-11-06 12:38:42.073957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.623 [2024-11-06 12:38:42.074007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.623 pt1 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.623 malloc2 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.623 [2024-11-06 12:38:42.122916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:53.623 [2024-11-06 12:38:42.122984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.623 [2024-11-06 12:38:42.123020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:53.623 [2024-11-06 12:38:42.123036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.623 [2024-11-06 12:38:42.125731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.623 [2024-11-06 12:38:42.125775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:53.623 pt2 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.623 [2024-11-06 12:38:42.130992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:53.623 [2024-11-06 12:38:42.133409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:53.623 [2024-11-06 12:38:42.133621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:53.623 [2024-11-06 12:38:42.133640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:53.623 [2024-11-06 12:38:42.133959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:53.623 [2024-11-06 12:38:42.134165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:53.623 [2024-11-06 12:38:42.134208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:53.623 [2024-11-06 12:38:42.134393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.623 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.624 12:38:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.624 "name": "raid_bdev1", 00:07:53.624 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:53.624 "strip_size_kb": 64, 00:07:53.624 "state": "online", 00:07:53.624 "raid_level": "concat", 00:07:53.624 "superblock": true, 00:07:53.624 "num_base_bdevs": 2, 00:07:53.624 "num_base_bdevs_discovered": 2, 00:07:53.624 "num_base_bdevs_operational": 2, 00:07:53.624 "base_bdevs_list": [ 00:07:53.624 { 00:07:53.624 "name": "pt1", 00:07:53.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:53.624 "is_configured": true, 00:07:53.624 "data_offset": 2048, 00:07:53.624 "data_size": 63488 00:07:53.624 }, 00:07:53.624 { 00:07:53.624 "name": "pt2", 00:07:53.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:53.624 "is_configured": true, 00:07:53.624 "data_offset": 2048, 00:07:53.624 "data_size": 63488 00:07:53.624 } 00:07:53.624 ] 00:07:53.624 }' 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.624 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.191 
12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.191 [2024-11-06 12:38:42.671452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.191 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.191 "name": "raid_bdev1", 00:07:54.191 "aliases": [ 00:07:54.191 "a1eaf719-3f52-4359-b234-1fb54b0f0875" 00:07:54.191 ], 00:07:54.191 "product_name": "Raid Volume", 00:07:54.191 "block_size": 512, 00:07:54.191 "num_blocks": 126976, 00:07:54.191 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:54.191 "assigned_rate_limits": { 00:07:54.191 "rw_ios_per_sec": 0, 00:07:54.191 "rw_mbytes_per_sec": 0, 00:07:54.191 "r_mbytes_per_sec": 0, 00:07:54.191 "w_mbytes_per_sec": 0 00:07:54.191 }, 00:07:54.191 "claimed": false, 00:07:54.191 "zoned": false, 00:07:54.191 "supported_io_types": { 00:07:54.191 "read": true, 00:07:54.191 "write": true, 00:07:54.191 "unmap": true, 00:07:54.191 "flush": true, 00:07:54.191 "reset": true, 00:07:54.191 "nvme_admin": false, 00:07:54.191 "nvme_io": false, 00:07:54.191 "nvme_io_md": false, 00:07:54.191 "write_zeroes": true, 00:07:54.191 "zcopy": false, 00:07:54.191 "get_zone_info": false, 00:07:54.191 "zone_management": false, 00:07:54.191 "zone_append": false, 00:07:54.191 "compare": false, 00:07:54.191 "compare_and_write": false, 00:07:54.191 "abort": false, 00:07:54.191 "seek_hole": false, 00:07:54.191 
"seek_data": false, 00:07:54.191 "copy": false, 00:07:54.191 "nvme_iov_md": false 00:07:54.191 }, 00:07:54.191 "memory_domains": [ 00:07:54.191 { 00:07:54.191 "dma_device_id": "system", 00:07:54.191 "dma_device_type": 1 00:07:54.192 }, 00:07:54.192 { 00:07:54.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.192 "dma_device_type": 2 00:07:54.192 }, 00:07:54.192 { 00:07:54.192 "dma_device_id": "system", 00:07:54.192 "dma_device_type": 1 00:07:54.192 }, 00:07:54.192 { 00:07:54.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.192 "dma_device_type": 2 00:07:54.192 } 00:07:54.192 ], 00:07:54.192 "driver_specific": { 00:07:54.192 "raid": { 00:07:54.192 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:54.192 "strip_size_kb": 64, 00:07:54.192 "state": "online", 00:07:54.192 "raid_level": "concat", 00:07:54.192 "superblock": true, 00:07:54.192 "num_base_bdevs": 2, 00:07:54.192 "num_base_bdevs_discovered": 2, 00:07:54.192 "num_base_bdevs_operational": 2, 00:07:54.192 "base_bdevs_list": [ 00:07:54.192 { 00:07:54.192 "name": "pt1", 00:07:54.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.192 "is_configured": true, 00:07:54.192 "data_offset": 2048, 00:07:54.192 "data_size": 63488 00:07:54.192 }, 00:07:54.192 { 00:07:54.192 "name": "pt2", 00:07:54.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.192 "is_configured": true, 00:07:54.192 "data_offset": 2048, 00:07:54.192 "data_size": 63488 00:07:54.192 } 00:07:54.192 ] 00:07:54.192 } 00:07:54.192 } 00:07:54.192 }' 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.192 pt2' 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.192 12:38:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.192 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:54.451 [2024-11-06 12:38:42.963516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.451 12:38:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a1eaf719-3f52-4359-b234-1fb54b0f0875 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a1eaf719-3f52-4359-b234-1fb54b0f0875 ']' 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.451 [2024-11-06 12:38:43.027137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.451 [2024-11-06 12:38:43.027178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.451 [2024-11-06 12:38:43.027300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.451 [2024-11-06 12:38:43.027365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.451 [2024-11-06 12:38:43.027385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.451 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:54.452 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.711 [2024-11-06 12:38:43.163231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:54.711 [2024-11-06 12:38:43.165744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:54.711 [2024-11-06 12:38:43.165854] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:54.711 [2024-11-06 12:38:43.165930] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:54.711 [2024-11-06 12:38:43.165958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.711 [2024-11-06 12:38:43.165974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:54.711 request: 00:07:54.711 { 00:07:54.711 "name": "raid_bdev1", 00:07:54.711 "raid_level": "concat", 00:07:54.711 "base_bdevs": [ 00:07:54.711 "malloc1", 00:07:54.711 "malloc2" 00:07:54.711 ], 00:07:54.711 "strip_size_kb": 64, 00:07:54.711 "superblock": false, 00:07:54.711 "method": "bdev_raid_create", 00:07:54.711 "req_id": 1 00:07:54.711 } 00:07:54.711 Got JSON-RPC error response 00:07:54.711 response: 00:07:54.711 { 00:07:54.711 "code": -17, 00:07:54.711 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:54.711 } 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.711 
12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.711 [2024-11-06 12:38:43.227227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.711 [2024-11-06 12:38:43.227301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.711 [2024-11-06 12:38:43.227334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:54.711 [2024-11-06 12:38:43.227353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.711 [2024-11-06 12:38:43.230315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.711 [2024-11-06 12:38:43.230365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.711 [2024-11-06 12:38:43.230470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:54.711 [2024-11-06 12:38:43.230552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.711 pt1 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.711 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.712 "name": "raid_bdev1", 00:07:54.712 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:54.712 "strip_size_kb": 64, 00:07:54.712 "state": "configuring", 00:07:54.712 "raid_level": "concat", 00:07:54.712 "superblock": true, 00:07:54.712 "num_base_bdevs": 2, 00:07:54.712 "num_base_bdevs_discovered": 1, 00:07:54.712 "num_base_bdevs_operational": 2, 00:07:54.712 "base_bdevs_list": [ 00:07:54.712 { 00:07:54.712 "name": "pt1", 00:07:54.712 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:54.712 "is_configured": true, 00:07:54.712 "data_offset": 2048, 00:07:54.712 "data_size": 63488 00:07:54.712 }, 00:07:54.712 { 00:07:54.712 "name": null, 00:07:54.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.712 "is_configured": false, 00:07:54.712 "data_offset": 2048, 00:07:54.712 "data_size": 63488 00:07:54.712 } 00:07:54.712 ] 00:07:54.712 }' 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.712 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.278 [2024-11-06 12:38:43.755398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.278 [2024-11-06 12:38:43.755487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.278 [2024-11-06 12:38:43.755522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.278 [2024-11-06 12:38:43.755542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.278 [2024-11-06 12:38:43.756125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.278 [2024-11-06 12:38:43.756171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:55.278 [2024-11-06 12:38:43.756288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.278 [2024-11-06 12:38:43.756328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.278 [2024-11-06 12:38:43.756469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.278 [2024-11-06 12:38:43.756498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.278 [2024-11-06 12:38:43.756789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:55.278 [2024-11-06 12:38:43.756994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.278 [2024-11-06 12:38:43.757017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.278 [2024-11-06 12:38:43.757218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.278 pt2 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.278 12:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.278 "name": "raid_bdev1", 00:07:55.278 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:55.278 "strip_size_kb": 64, 00:07:55.278 "state": "online", 00:07:55.278 "raid_level": "concat", 00:07:55.278 "superblock": true, 00:07:55.278 "num_base_bdevs": 2, 00:07:55.279 "num_base_bdevs_discovered": 2, 00:07:55.279 "num_base_bdevs_operational": 2, 00:07:55.279 "base_bdevs_list": [ 00:07:55.279 { 00:07:55.279 "name": "pt1", 00:07:55.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.279 "is_configured": true, 00:07:55.279 "data_offset": 2048, 00:07:55.279 "data_size": 63488 00:07:55.279 }, 00:07:55.279 { 00:07:55.279 "name": "pt2", 00:07:55.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.279 "is_configured": true, 00:07:55.279 "data_offset": 2048, 00:07:55.279 "data_size": 63488 00:07:55.279 } 00:07:55.279 ] 00:07:55.279 }' 00:07:55.279 12:38:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.279 12:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.846 [2024-11-06 12:38:44.291867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.846 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.846 "name": "raid_bdev1", 00:07:55.846 "aliases": [ 00:07:55.846 "a1eaf719-3f52-4359-b234-1fb54b0f0875" 00:07:55.846 ], 00:07:55.846 "product_name": "Raid Volume", 00:07:55.846 "block_size": 512, 00:07:55.846 "num_blocks": 126976, 00:07:55.846 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:55.846 "assigned_rate_limits": { 00:07:55.846 "rw_ios_per_sec": 0, 00:07:55.846 "rw_mbytes_per_sec": 0, 00:07:55.846 
"r_mbytes_per_sec": 0, 00:07:55.846 "w_mbytes_per_sec": 0 00:07:55.846 }, 00:07:55.846 "claimed": false, 00:07:55.846 "zoned": false, 00:07:55.846 "supported_io_types": { 00:07:55.846 "read": true, 00:07:55.846 "write": true, 00:07:55.846 "unmap": true, 00:07:55.846 "flush": true, 00:07:55.846 "reset": true, 00:07:55.846 "nvme_admin": false, 00:07:55.846 "nvme_io": false, 00:07:55.846 "nvme_io_md": false, 00:07:55.846 "write_zeroes": true, 00:07:55.846 "zcopy": false, 00:07:55.846 "get_zone_info": false, 00:07:55.846 "zone_management": false, 00:07:55.846 "zone_append": false, 00:07:55.846 "compare": false, 00:07:55.846 "compare_and_write": false, 00:07:55.846 "abort": false, 00:07:55.846 "seek_hole": false, 00:07:55.847 "seek_data": false, 00:07:55.847 "copy": false, 00:07:55.847 "nvme_iov_md": false 00:07:55.847 }, 00:07:55.847 "memory_domains": [ 00:07:55.847 { 00:07:55.847 "dma_device_id": "system", 00:07:55.847 "dma_device_type": 1 00:07:55.847 }, 00:07:55.847 { 00:07:55.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.847 "dma_device_type": 2 00:07:55.847 }, 00:07:55.847 { 00:07:55.847 "dma_device_id": "system", 00:07:55.847 "dma_device_type": 1 00:07:55.847 }, 00:07:55.847 { 00:07:55.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.847 "dma_device_type": 2 00:07:55.847 } 00:07:55.847 ], 00:07:55.847 "driver_specific": { 00:07:55.847 "raid": { 00:07:55.847 "uuid": "a1eaf719-3f52-4359-b234-1fb54b0f0875", 00:07:55.847 "strip_size_kb": 64, 00:07:55.847 "state": "online", 00:07:55.847 "raid_level": "concat", 00:07:55.847 "superblock": true, 00:07:55.847 "num_base_bdevs": 2, 00:07:55.847 "num_base_bdevs_discovered": 2, 00:07:55.847 "num_base_bdevs_operational": 2, 00:07:55.847 "base_bdevs_list": [ 00:07:55.847 { 00:07:55.847 "name": "pt1", 00:07:55.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.847 "is_configured": true, 00:07:55.847 "data_offset": 2048, 00:07:55.847 "data_size": 63488 00:07:55.847 }, 00:07:55.847 { 00:07:55.847 "name": 
"pt2", 00:07:55.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.847 "is_configured": true, 00:07:55.847 "data_offset": 2048, 00:07:55.847 "data_size": 63488 00:07:55.847 } 00:07:55.847 ] 00:07:55.847 } 00:07:55.847 } 00:07:55.847 }' 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.847 pt2' 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.847 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.105 [2024-11-06 12:38:44.579900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a1eaf719-3f52-4359-b234-1fb54b0f0875 '!=' a1eaf719-3f52-4359-b234-1fb54b0f0875 ']' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62177 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62177 ']' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 62177 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62177 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.105 killing process with pid 62177 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62177' 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62177 00:07:56.105 [2024-11-06 12:38:44.654869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.105 [2024-11-06 12:38:44.654994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.105 12:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62177 00:07:56.105 [2024-11-06 12:38:44.655061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.105 [2024-11-06 12:38:44.655081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.363 [2024-11-06 12:38:44.842668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.335 12:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:57.335 00:07:57.335 real 0m4.983s 00:07:57.335 user 0m7.390s 00:07:57.335 sys 0m0.750s 00:07:57.335 12:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.335 12:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:57.335 ************************************ 00:07:57.335 END TEST raid_superblock_test 00:07:57.335 ************************************ 00:07:57.335 12:38:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:57.335 12:38:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:57.335 12:38:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.335 12:38:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.335 ************************************ 00:07:57.335 START TEST raid_read_error_test 00:07:57.335 ************************************ 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.335 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g5IpQzV7mO 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62394 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62394 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62394 ']' 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.336 12:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.594 [2024-11-06 12:38:46.059552] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:07:57.594 [2024-11-06 12:38:46.059731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62394 ] 00:07:57.594 [2024-11-06 12:38:46.244706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.852 [2024-11-06 12:38:46.403602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.110 [2024-11-06 12:38:46.609829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.110 [2024-11-06 12:38:46.609875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.368 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.368 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:58.368 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.368 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.368 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.368 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 BaseBdev1_malloc 
00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 true 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 [2024-11-06 12:38:47.057777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.627 [2024-11-06 12:38:47.057845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.627 [2024-11-06 12:38:47.057874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.627 [2024-11-06 12:38:47.057893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.627 [2024-11-06 12:38:47.060683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.627 [2024-11-06 12:38:47.060733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.627 BaseBdev1 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 BaseBdev2_malloc 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 true 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 [2024-11-06 12:38:47.117522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.627 [2024-11-06 12:38:47.117596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.627 [2024-11-06 12:38:47.117622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.627 [2024-11-06 12:38:47.117641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.627 [2024-11-06 12:38:47.120426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.627 [2024-11-06 12:38:47.120477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.627 BaseBdev2 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 [2024-11-06 12:38:47.125632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.627 [2024-11-06 12:38:47.128158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.627 [2024-11-06 12:38:47.128447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.627 [2024-11-06 12:38:47.128480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.627 [2024-11-06 12:38:47.128805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.627 [2024-11-06 12:38:47.129037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.627 [2024-11-06 12:38:47.129066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.627 [2024-11-06 12:38:47.129301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.627 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.628 "name": "raid_bdev1", 00:07:58.628 "uuid": "a1afed02-5ae8-4d68-872c-74f8dc1354c5", 00:07:58.628 "strip_size_kb": 64, 00:07:58.628 "state": "online", 00:07:58.628 "raid_level": "concat", 00:07:58.628 "superblock": true, 00:07:58.628 "num_base_bdevs": 2, 00:07:58.628 "num_base_bdevs_discovered": 2, 00:07:58.628 "num_base_bdevs_operational": 2, 00:07:58.628 "base_bdevs_list": [ 00:07:58.628 { 00:07:58.628 "name": "BaseBdev1", 00:07:58.628 "uuid": "123a754f-e60d-546a-af6e-8b150dc0efda", 00:07:58.628 "is_configured": true, 00:07:58.628 "data_offset": 2048, 00:07:58.628 "data_size": 63488 00:07:58.628 }, 00:07:58.628 { 00:07:58.628 "name": "BaseBdev2", 00:07:58.628 
"uuid": "94188c3a-9083-5d62-b83d-10a6c317351f", 00:07:58.628 "is_configured": true, 00:07:58.628 "data_offset": 2048, 00:07:58.628 "data_size": 63488 00:07:58.628 } 00:07:58.628 ] 00:07:58.628 }' 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.628 12:38:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.194 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:59.194 12:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.194 [2024-11-06 12:38:47.775139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.154 "name": "raid_bdev1", 00:08:00.154 "uuid": "a1afed02-5ae8-4d68-872c-74f8dc1354c5", 00:08:00.154 "strip_size_kb": 64, 00:08:00.154 "state": "online", 00:08:00.154 "raid_level": "concat", 00:08:00.154 "superblock": true, 00:08:00.154 "num_base_bdevs": 2, 00:08:00.154 "num_base_bdevs_discovered": 2, 00:08:00.154 "num_base_bdevs_operational": 2, 00:08:00.154 "base_bdevs_list": [ 00:08:00.154 { 00:08:00.154 "name": "BaseBdev1", 00:08:00.154 "uuid": "123a754f-e60d-546a-af6e-8b150dc0efda", 00:08:00.154 "is_configured": true, 00:08:00.154 "data_offset": 2048, 00:08:00.154 "data_size": 63488 00:08:00.154 }, 00:08:00.154 { 00:08:00.154 "name": "BaseBdev2", 00:08:00.154 "uuid": 
"94188c3a-9083-5d62-b83d-10a6c317351f", 00:08:00.154 "is_configured": true, 00:08:00.154 "data_offset": 2048, 00:08:00.154 "data_size": 63488 00:08:00.154 } 00:08:00.154 ] 00:08:00.154 }' 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.154 12:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.721 12:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.721 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.721 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.721 [2024-11-06 12:38:49.141016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.722 [2024-11-06 12:38:49.141063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.722 [2024-11-06 12:38:49.144411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.722 [2024-11-06 12:38:49.144474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.722 [2024-11-06 12:38:49.144518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.722 [2024-11-06 12:38:49.144537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.722 { 00:08:00.722 "results": [ 00:08:00.722 { 00:08:00.722 "job": "raid_bdev1", 00:08:00.722 "core_mask": "0x1", 00:08:00.722 "workload": "randrw", 00:08:00.722 "percentage": 50, 00:08:00.722 "status": "finished", 00:08:00.722 "queue_depth": 1, 00:08:00.722 "io_size": 131072, 00:08:00.722 "runtime": 1.363507, 00:08:00.722 "iops": 10940.9045938158, 00:08:00.722 "mibps": 1367.613074226975, 00:08:00.722 "io_failed": 1, 00:08:00.722 "io_timeout": 0, 00:08:00.722 "avg_latency_us": 
127.66012052964797, 00:08:00.722 "min_latency_us": 42.123636363636365, 00:08:00.722 "max_latency_us": 1921.3963636363637 00:08:00.722 } 00:08:00.722 ], 00:08:00.722 "core_count": 1 00:08:00.722 } 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62394 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62394 ']' 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62394 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62394 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.722 killing process with pid 62394 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62394' 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62394 00:08:00.722 12:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62394 00:08:00.722 [2024-11-06 12:38:49.176656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.722 [2024-11-06 12:38:49.302859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g5IpQzV7mO 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.099 
12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:02.099 00:08:02.099 real 0m4.473s 00:08:02.099 user 0m5.584s 00:08:02.099 sys 0m0.550s 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.099 12:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.099 ************************************ 00:08:02.099 END TEST raid_read_error_test 00:08:02.099 ************************************ 00:08:02.099 12:38:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:02.099 12:38:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:02.099 12:38:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.099 12:38:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.099 ************************************ 00:08:02.099 START TEST raid_write_error_test 00:08:02.099 ************************************ 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:02.099 12:38:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kA4eQwf5ki 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62540 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62540 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62540 ']' 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.099 12:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.099 [2024-11-06 12:38:50.560058] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:08:02.099 [2024-11-06 12:38:50.560269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62540 ] 00:08:02.099 [2024-11-06 12:38:50.750710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.357 [2024-11-06 12:38:50.904644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.615 [2024-11-06 12:38:51.114982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.615 [2024-11-06 12:38:51.115059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 BaseBdev1_malloc 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 true 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 [2024-11-06 12:38:51.658405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:03.230 [2024-11-06 12:38:51.658475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.230 [2024-11-06 12:38:51.658505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:03.230 [2024-11-06 12:38:51.658522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.230 [2024-11-06 12:38:51.661325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.230 [2024-11-06 12:38:51.661378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:03.230 BaseBdev1 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 BaseBdev2_malloc 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:03.230 12:38:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 true 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 [2024-11-06 12:38:51.714251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:03.230 [2024-11-06 12:38:51.714318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.230 [2024-11-06 12:38:51.714341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:03.230 [2024-11-06 12:38:51.714358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.230 [2024-11-06 12:38:51.717097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.230 [2024-11-06 12:38:51.717151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:03.230 BaseBdev2 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.230 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.230 [2024-11-06 12:38:51.722319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:03.231 [2024-11-06 12:38:51.724722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.231 [2024-11-06 12:38:51.724981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.231 [2024-11-06 12:38:51.725005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:03.231 [2024-11-06 12:38:51.725308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:03.231 [2024-11-06 12:38:51.725533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.231 [2024-11-06 12:38:51.725575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:03.231 [2024-11-06 12:38:51.725768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.231 12:38:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.231 "name": "raid_bdev1", 00:08:03.231 "uuid": "ad40ee9f-a4ac-47a0-8267-8dea0e496ed4", 00:08:03.231 "strip_size_kb": 64, 00:08:03.231 "state": "online", 00:08:03.231 "raid_level": "concat", 00:08:03.231 "superblock": true, 00:08:03.231 "num_base_bdevs": 2, 00:08:03.231 "num_base_bdevs_discovered": 2, 00:08:03.231 "num_base_bdevs_operational": 2, 00:08:03.231 "base_bdevs_list": [ 00:08:03.231 { 00:08:03.231 "name": "BaseBdev1", 00:08:03.231 "uuid": "642d52c3-d096-52ea-8524-963144c0931f", 00:08:03.231 "is_configured": true, 00:08:03.231 "data_offset": 2048, 00:08:03.231 "data_size": 63488 00:08:03.231 }, 00:08:03.231 { 00:08:03.231 "name": "BaseBdev2", 00:08:03.231 "uuid": "32024175-904d-5418-ad14-e87902b17c84", 00:08:03.231 "is_configured": true, 00:08:03.231 "data_offset": 2048, 00:08:03.231 "data_size": 63488 00:08:03.231 } 00:08:03.231 ] 00:08:03.231 }' 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.231 12:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.797 12:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:03.797 12:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.797 [2024-11-06 12:38:52.311870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.733 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.734 "name": "raid_bdev1", 00:08:04.734 "uuid": "ad40ee9f-a4ac-47a0-8267-8dea0e496ed4", 00:08:04.734 "strip_size_kb": 64, 00:08:04.734 "state": "online", 00:08:04.734 "raid_level": "concat", 00:08:04.734 "superblock": true, 00:08:04.734 "num_base_bdevs": 2, 00:08:04.734 "num_base_bdevs_discovered": 2, 00:08:04.734 "num_base_bdevs_operational": 2, 00:08:04.734 "base_bdevs_list": [ 00:08:04.734 { 00:08:04.734 "name": "BaseBdev1", 00:08:04.734 "uuid": "642d52c3-d096-52ea-8524-963144c0931f", 00:08:04.734 "is_configured": true, 00:08:04.734 "data_offset": 2048, 00:08:04.734 "data_size": 63488 00:08:04.734 }, 00:08:04.734 { 00:08:04.734 "name": "BaseBdev2", 00:08:04.734 "uuid": "32024175-904d-5418-ad14-e87902b17c84", 00:08:04.734 "is_configured": true, 00:08:04.734 "data_offset": 2048, 00:08:04.734 "data_size": 63488 00:08:04.734 } 00:08:04.734 ] 00:08:04.734 }' 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.734 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.301 12:38:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.301 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.301 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.301 [2024-11-06 12:38:53.734267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.301 [2024-11-06 12:38:53.734311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.301 [2024-11-06 12:38:53.737629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.302 [2024-11-06 12:38:53.737694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.302 [2024-11-06 12:38:53.737738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.302 [2024-11-06 12:38:53.737759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:05.302 { 00:08:05.302 "results": [ 00:08:05.302 { 00:08:05.302 "job": "raid_bdev1", 00:08:05.302 "core_mask": "0x1", 00:08:05.302 "workload": "randrw", 00:08:05.302 "percentage": 50, 00:08:05.302 "status": "finished", 00:08:05.302 "queue_depth": 1, 00:08:05.302 "io_size": 131072, 00:08:05.302 "runtime": 1.420051, 00:08:05.302 "iops": 10801.020526727561, 00:08:05.302 "mibps": 1350.1275658409452, 00:08:05.302 "io_failed": 1, 00:08:05.302 "io_timeout": 0, 00:08:05.302 "avg_latency_us": 128.94438964256292, 00:08:05.302 "min_latency_us": 42.123636363636365, 00:08:05.302 "max_latency_us": 1861.8181818181818 00:08:05.302 } 00:08:05.302 ], 00:08:05.302 "core_count": 1 00:08:05.302 } 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62540 00:08:05.302 12:38:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62540 ']' 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62540 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62540 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.302 killing process with pid 62540 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62540' 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62540 00:08:05.302 12:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62540 00:08:05.302 [2024-11-06 12:38:53.770604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.302 [2024-11-06 12:38:53.893213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kA4eQwf5ki 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.712 12:38:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:06.712 00:08:06.712 real 0m4.557s 00:08:06.712 user 0m5.710s 00:08:06.712 sys 0m0.556s 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.712 ************************************ 00:08:06.712 END TEST raid_write_error_test 00:08:06.712 ************************************ 00:08:06.712 12:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.712 12:38:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:06.712 12:38:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:06.712 12:38:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:06.712 12:38:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.712 12:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.712 ************************************ 00:08:06.712 START TEST raid_state_function_test 00:08:06.712 ************************************ 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62678 00:08:06.712 Process raid pid: 62678 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62678' 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62678 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62678 ']' 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.712 12:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.712 [2024-11-06 12:38:55.179732] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:08:06.712 [2024-11-06 12:38:55.179929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.712 [2024-11-06 12:38:55.363134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.971 [2024-11-06 12:38:55.493201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.240 [2024-11-06 12:38:55.701326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.240 [2024-11-06 12:38:55.701401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.498 [2024-11-06 12:38:56.101993] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.498 [2024-11-06 12:38:56.102059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.498 [2024-11-06 12:38:56.102076] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.498 [2024-11-06 12:38:56.102093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.498 12:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.498 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.499 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.499 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.499 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.757 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.757 "name": "Existed_Raid", 00:08:07.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.757 "strip_size_kb": 0, 00:08:07.757 "state": "configuring", 00:08:07.757 
"raid_level": "raid1", 00:08:07.757 "superblock": false, 00:08:07.757 "num_base_bdevs": 2, 00:08:07.757 "num_base_bdevs_discovered": 0, 00:08:07.757 "num_base_bdevs_operational": 2, 00:08:07.757 "base_bdevs_list": [ 00:08:07.757 { 00:08:07.757 "name": "BaseBdev1", 00:08:07.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.757 "is_configured": false, 00:08:07.757 "data_offset": 0, 00:08:07.757 "data_size": 0 00:08:07.757 }, 00:08:07.757 { 00:08:07.757 "name": "BaseBdev2", 00:08:07.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.757 "is_configured": false, 00:08:07.757 "data_offset": 0, 00:08:07.757 "data_size": 0 00:08:07.757 } 00:08:07.757 ] 00:08:07.757 }' 00:08:07.757 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.757 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 [2024-11-06 12:38:56.606083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.016 [2024-11-06 12:38:56.606128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:08.016 [2024-11-06 12:38:56.614054] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.016 [2024-11-06 12:38:56.614108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.016 [2024-11-06 12:38:56.614123] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.016 [2024-11-06 12:38:56.614142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 [2024-11-06 12:38:56.658659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.016 BaseBdev1 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.274 [ 00:08:08.274 { 00:08:08.274 "name": "BaseBdev1", 00:08:08.274 "aliases": [ 00:08:08.274 "46aac3c2-2f8d-48c2-b8f0-1ee986168b35" 00:08:08.274 ], 00:08:08.274 "product_name": "Malloc disk", 00:08:08.274 "block_size": 512, 00:08:08.274 "num_blocks": 65536, 00:08:08.274 "uuid": "46aac3c2-2f8d-48c2-b8f0-1ee986168b35", 00:08:08.274 "assigned_rate_limits": { 00:08:08.274 "rw_ios_per_sec": 0, 00:08:08.274 "rw_mbytes_per_sec": 0, 00:08:08.274 "r_mbytes_per_sec": 0, 00:08:08.274 "w_mbytes_per_sec": 0 00:08:08.274 }, 00:08:08.274 "claimed": true, 00:08:08.274 "claim_type": "exclusive_write", 00:08:08.274 "zoned": false, 00:08:08.274 "supported_io_types": { 00:08:08.274 "read": true, 00:08:08.274 "write": true, 00:08:08.274 "unmap": true, 00:08:08.274 "flush": true, 00:08:08.274 "reset": true, 00:08:08.274 "nvme_admin": false, 00:08:08.274 "nvme_io": false, 00:08:08.274 "nvme_io_md": false, 00:08:08.274 "write_zeroes": true, 00:08:08.274 "zcopy": true, 00:08:08.274 "get_zone_info": false, 00:08:08.274 "zone_management": false, 00:08:08.274 "zone_append": false, 00:08:08.274 "compare": false, 00:08:08.274 "compare_and_write": false, 00:08:08.274 "abort": true, 00:08:08.274 "seek_hole": false, 00:08:08.274 "seek_data": false, 00:08:08.274 "copy": true, 00:08:08.274 "nvme_iov_md": 
false 00:08:08.274 }, 00:08:08.274 "memory_domains": [ 00:08:08.274 { 00:08:08.274 "dma_device_id": "system", 00:08:08.274 "dma_device_type": 1 00:08:08.274 }, 00:08:08.274 { 00:08:08.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.274 "dma_device_type": 2 00:08:08.274 } 00:08:08.274 ], 00:08:08.274 "driver_specific": {} 00:08:08.274 } 00:08:08.274 ] 00:08:08.274 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.274 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:08.274 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.275 12:38:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.275 "name": "Existed_Raid", 00:08:08.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.275 "strip_size_kb": 0, 00:08:08.275 "state": "configuring", 00:08:08.275 "raid_level": "raid1", 00:08:08.275 "superblock": false, 00:08:08.275 "num_base_bdevs": 2, 00:08:08.275 "num_base_bdevs_discovered": 1, 00:08:08.275 "num_base_bdevs_operational": 2, 00:08:08.275 "base_bdevs_list": [ 00:08:08.275 { 00:08:08.275 "name": "BaseBdev1", 00:08:08.275 "uuid": "46aac3c2-2f8d-48c2-b8f0-1ee986168b35", 00:08:08.275 "is_configured": true, 00:08:08.275 "data_offset": 0, 00:08:08.275 "data_size": 65536 00:08:08.275 }, 00:08:08.275 { 00:08:08.275 "name": "BaseBdev2", 00:08:08.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.275 "is_configured": false, 00:08:08.275 "data_offset": 0, 00:08:08.275 "data_size": 0 00:08:08.275 } 00:08:08.275 ] 00:08:08.275 }' 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.275 12:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 [2024-11-06 12:38:57.166833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.533 [2024-11-06 12:38:57.166896] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 [2024-11-06 12:38:57.174870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.533 [2024-11-06 12:38:57.177308] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.533 [2024-11-06 12:38:57.177362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.533 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.792 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.792 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.792 "name": "Existed_Raid", 00:08:08.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.792 "strip_size_kb": 0, 00:08:08.792 "state": "configuring", 00:08:08.792 "raid_level": "raid1", 00:08:08.792 "superblock": false, 00:08:08.792 "num_base_bdevs": 2, 00:08:08.792 "num_base_bdevs_discovered": 1, 00:08:08.792 "num_base_bdevs_operational": 2, 00:08:08.792 "base_bdevs_list": [ 00:08:08.792 { 00:08:08.792 "name": "BaseBdev1", 00:08:08.792 "uuid": "46aac3c2-2f8d-48c2-b8f0-1ee986168b35", 00:08:08.792 "is_configured": true, 00:08:08.792 "data_offset": 0, 00:08:08.792 "data_size": 65536 00:08:08.792 }, 00:08:08.792 { 00:08:08.792 "name": "BaseBdev2", 00:08:08.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.792 "is_configured": false, 00:08:08.792 "data_offset": 0, 00:08:08.792 "data_size": 0 00:08:08.792 } 00:08:08.792 
] 00:08:08.792 }' 00:08:08.792 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.792 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.050 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.050 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.050 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.308 [2024-11-06 12:38:57.733011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.308 [2024-11-06 12:38:57.733099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.308 [2024-11-06 12:38:57.733113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:09.308 [2024-11-06 12:38:57.733489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.308 [2024-11-06 12:38:57.733715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.308 [2024-11-06 12:38:57.733748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.308 [2024-11-06 12:38:57.734076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.308 BaseBdev2 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:09.308 12:38:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.308 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.308 [ 00:08:09.308 { 00:08:09.308 "name": "BaseBdev2", 00:08:09.308 "aliases": [ 00:08:09.308 "946e7cc3-7239-4305-aae6-08c0608c9d4c" 00:08:09.308 ], 00:08:09.308 "product_name": "Malloc disk", 00:08:09.308 "block_size": 512, 00:08:09.308 "num_blocks": 65536, 00:08:09.308 "uuid": "946e7cc3-7239-4305-aae6-08c0608c9d4c", 00:08:09.308 "assigned_rate_limits": { 00:08:09.308 "rw_ios_per_sec": 0, 00:08:09.308 "rw_mbytes_per_sec": 0, 00:08:09.308 "r_mbytes_per_sec": 0, 00:08:09.308 "w_mbytes_per_sec": 0 00:08:09.308 }, 00:08:09.308 "claimed": true, 00:08:09.308 "claim_type": "exclusive_write", 00:08:09.308 "zoned": false, 00:08:09.308 "supported_io_types": { 00:08:09.308 "read": true, 00:08:09.308 "write": true, 00:08:09.308 "unmap": true, 00:08:09.309 "flush": true, 00:08:09.309 "reset": true, 00:08:09.309 "nvme_admin": false, 00:08:09.309 "nvme_io": false, 00:08:09.309 "nvme_io_md": 
false, 00:08:09.309 "write_zeroes": true, 00:08:09.309 "zcopy": true, 00:08:09.309 "get_zone_info": false, 00:08:09.309 "zone_management": false, 00:08:09.309 "zone_append": false, 00:08:09.309 "compare": false, 00:08:09.309 "compare_and_write": false, 00:08:09.309 "abort": true, 00:08:09.309 "seek_hole": false, 00:08:09.309 "seek_data": false, 00:08:09.309 "copy": true, 00:08:09.309 "nvme_iov_md": false 00:08:09.309 }, 00:08:09.309 "memory_domains": [ 00:08:09.309 { 00:08:09.309 "dma_device_id": "system", 00:08:09.309 "dma_device_type": 1 00:08:09.309 }, 00:08:09.309 { 00:08:09.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.309 "dma_device_type": 2 00:08:09.309 } 00:08:09.309 ], 00:08:09.309 "driver_specific": {} 00:08:09.309 } 00:08:09.309 ] 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.309 "name": "Existed_Raid", 00:08:09.309 "uuid": "dbcc51e0-55c5-4000-9cb3-09ecd1cc7eb7", 00:08:09.309 "strip_size_kb": 0, 00:08:09.309 "state": "online", 00:08:09.309 "raid_level": "raid1", 00:08:09.309 "superblock": false, 00:08:09.309 "num_base_bdevs": 2, 00:08:09.309 "num_base_bdevs_discovered": 2, 00:08:09.309 "num_base_bdevs_operational": 2, 00:08:09.309 "base_bdevs_list": [ 00:08:09.309 { 00:08:09.309 "name": "BaseBdev1", 00:08:09.309 "uuid": "46aac3c2-2f8d-48c2-b8f0-1ee986168b35", 00:08:09.309 "is_configured": true, 00:08:09.309 "data_offset": 0, 00:08:09.309 "data_size": 65536 00:08:09.309 }, 00:08:09.309 { 00:08:09.309 "name": "BaseBdev2", 00:08:09.309 "uuid": "946e7cc3-7239-4305-aae6-08c0608c9d4c", 00:08:09.309 "is_configured": true, 00:08:09.309 "data_offset": 0, 00:08:09.309 "data_size": 65536 00:08:09.309 } 00:08:09.309 ] 00:08:09.309 }' 00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:09.309 12:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.876 [2024-11-06 12:38:58.253567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.876 "name": "Existed_Raid", 00:08:09.876 "aliases": [ 00:08:09.876 "dbcc51e0-55c5-4000-9cb3-09ecd1cc7eb7" 00:08:09.876 ], 00:08:09.876 "product_name": "Raid Volume", 00:08:09.876 "block_size": 512, 00:08:09.876 "num_blocks": 65536, 00:08:09.876 "uuid": "dbcc51e0-55c5-4000-9cb3-09ecd1cc7eb7", 00:08:09.876 "assigned_rate_limits": { 00:08:09.876 "rw_ios_per_sec": 0, 00:08:09.876 "rw_mbytes_per_sec": 0, 00:08:09.876 "r_mbytes_per_sec": 
0, 00:08:09.876 "w_mbytes_per_sec": 0 00:08:09.876 }, 00:08:09.876 "claimed": false, 00:08:09.876 "zoned": false, 00:08:09.876 "supported_io_types": { 00:08:09.876 "read": true, 00:08:09.876 "write": true, 00:08:09.876 "unmap": false, 00:08:09.876 "flush": false, 00:08:09.876 "reset": true, 00:08:09.876 "nvme_admin": false, 00:08:09.876 "nvme_io": false, 00:08:09.876 "nvme_io_md": false, 00:08:09.876 "write_zeroes": true, 00:08:09.876 "zcopy": false, 00:08:09.876 "get_zone_info": false, 00:08:09.876 "zone_management": false, 00:08:09.876 "zone_append": false, 00:08:09.876 "compare": false, 00:08:09.876 "compare_and_write": false, 00:08:09.876 "abort": false, 00:08:09.876 "seek_hole": false, 00:08:09.876 "seek_data": false, 00:08:09.876 "copy": false, 00:08:09.876 "nvme_iov_md": false 00:08:09.876 }, 00:08:09.876 "memory_domains": [ 00:08:09.876 { 00:08:09.876 "dma_device_id": "system", 00:08:09.876 "dma_device_type": 1 00:08:09.876 }, 00:08:09.876 { 00:08:09.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.876 "dma_device_type": 2 00:08:09.876 }, 00:08:09.876 { 00:08:09.876 "dma_device_id": "system", 00:08:09.876 "dma_device_type": 1 00:08:09.876 }, 00:08:09.876 { 00:08:09.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.876 "dma_device_type": 2 00:08:09.876 } 00:08:09.876 ], 00:08:09.876 "driver_specific": { 00:08:09.876 "raid": { 00:08:09.876 "uuid": "dbcc51e0-55c5-4000-9cb3-09ecd1cc7eb7", 00:08:09.876 "strip_size_kb": 0, 00:08:09.876 "state": "online", 00:08:09.876 "raid_level": "raid1", 00:08:09.876 "superblock": false, 00:08:09.876 "num_base_bdevs": 2, 00:08:09.876 "num_base_bdevs_discovered": 2, 00:08:09.876 "num_base_bdevs_operational": 2, 00:08:09.876 "base_bdevs_list": [ 00:08:09.876 { 00:08:09.876 "name": "BaseBdev1", 00:08:09.876 "uuid": "46aac3c2-2f8d-48c2-b8f0-1ee986168b35", 00:08:09.876 "is_configured": true, 00:08:09.876 "data_offset": 0, 00:08:09.876 "data_size": 65536 00:08:09.876 }, 00:08:09.876 { 00:08:09.876 "name": "BaseBdev2", 
00:08:09.876 "uuid": "946e7cc3-7239-4305-aae6-08c0608c9d4c",
00:08:09.876 "is_configured": true,
00:08:09.876 "data_offset": 0,
00:08:09.876 "data_size": 65536
00:08:09.876 }
00:08:09.876 ]
00:08:09.876 }
00:08:09.876 }
00:08:09.876 }'
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:09.876 BaseBdev2'
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.876 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.876 [2024-11-06 12:38:58.525847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:10.135 "name": "Existed_Raid",
00:08:10.135 "uuid": "dbcc51e0-55c5-4000-9cb3-09ecd1cc7eb7",
00:08:10.135 "strip_size_kb": 0,
00:08:10.135 "state": "online",
00:08:10.135 "raid_level": "raid1",
00:08:10.135 "superblock": false,
00:08:10.135 "num_base_bdevs": 2,
00:08:10.135 "num_base_bdevs_discovered": 1,
00:08:10.135 "num_base_bdevs_operational": 1,
00:08:10.135 "base_bdevs_list": [
00:08:10.135 {
00:08:10.135 "name": null,
00:08:10.135 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.135 "is_configured": false,
00:08:10.135 "data_offset": 0,
00:08:10.135 "data_size": 65536
00:08:10.135 },
00:08:10.135 {
00:08:10.135 "name": "BaseBdev2",
00:08:10.135 "uuid": "946e7cc3-7239-4305-aae6-08c0608c9d4c",
00:08:10.135 "is_configured": true,
00:08:10.135 "data_offset": 0,
00:08:10.135 "data_size": 65536
00:08:10.135 }
00:08:10.135 ]
00:08:10.135 }'
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:10.135 12:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.751 [2024-11-06 12:38:59.184219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:10.751 [2024-11-06 12:38:59.184346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:10.751 [2024-11-06 12:38:59.271431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:10.751 [2024-11-06 12:38:59.271513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:10.751 [2024-11-06 12:38:59.271533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62678
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62678 ']'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62678
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62678
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:10.751 12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:10.751 killing process with pid 62678
12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62678'
12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62678
00:08:10.751 [2024-11-06 12:38:59.368702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
12:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62678
00:08:10.751 [2024-11-06 12:38:59.383401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:12.128
00:08:12.128 real 0m5.346s
00:08:12.128 user 0m8.056s
00:08:12.128 sys 0m0.774s
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.128 ************************************
00:08:12.128 END TEST raid_state_function_test
00:08:12.128 ************************************
00:08:12.128 12:39:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true
00:08:12.128 12:39:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:08:12.128 12:39:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:12.128 12:39:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:12.128 ************************************
00:08:12.128 START TEST raid_state_function_test_sb
00:08:12.128 ************************************
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62935
00:08:12.128 Process raid pid: 62935
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62935'
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62935
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62935 ']'
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:12.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:12.128 12:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:12.128 [2024-11-06 12:39:00.584659] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization...
00:08:12.128 [2024-11-06 12:39:00.584847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:12.128 [2024-11-06 12:39:00.774445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.387 [2024-11-06 12:39:00.907063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.714 [2024-11-06 12:39:01.116065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:12.714 [2024-11-06 12:39:01.116142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:12.984 [2024-11-06 12:39:01.595720] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:12.984 [2024-11-06 12:39:01.595795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:12.984 [2024-11-06 12:39:01.595817] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:12.984 [2024-11-06 12:39:01.595837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:12.984 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.242 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.242 "name": "Existed_Raid",
00:08:13.242 "uuid": "83e44bea-d376-4585-aedb-426192a4f538",
00:08:13.242 "strip_size_kb": 0,
00:08:13.242 "state": "configuring",
00:08:13.242 "raid_level": "raid1",
00:08:13.243 "superblock": true,
00:08:13.243 "num_base_bdevs": 2,
00:08:13.243 "num_base_bdevs_discovered": 0,
00:08:13.243 "num_base_bdevs_operational": 2,
00:08:13.243 "base_bdevs_list": [
00:08:13.243 {
00:08:13.243 "name": "BaseBdev1",
00:08:13.243 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.243 "is_configured": false,
00:08:13.243 "data_offset": 0,
00:08:13.243 "data_size": 0
00:08:13.243 },
00:08:13.243 {
00:08:13.243 "name": "BaseBdev2",
00:08:13.243 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.243 "is_configured": false,
00:08:13.243 "data_offset": 0,
00:08:13.243 "data_size": 0
00:08:13.243 }
00:08:13.243 ]
00:08:13.243 }'
00:08:13.243 12:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.243 12:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.501 [2024-11-06 12:39:02.111788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:13.501 [2024-11-06 12:39:02.111843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.501 [2024-11-06 12:39:02.119765] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:13.501 [2024-11-06 12:39:02.119835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:13.501 [2024-11-06 12:39:02.119854] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:13.501 [2024-11-06 12:39:02.119877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.501 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.760 [2024-11-06 12:39:02.164862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:13.760 BaseBdev1
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.760 [
00:08:13.760 {
00:08:13.760 "name": "BaseBdev1",
00:08:13.760 "aliases": [
00:08:13.760 "79a89817-f7c3-4b2f-acd1-253b7b10ea29"
00:08:13.760 ],
00:08:13.760 "product_name": "Malloc disk",
00:08:13.760 "block_size": 512,
00:08:13.760 "num_blocks": 65536,
00:08:13.760 "uuid": "79a89817-f7c3-4b2f-acd1-253b7b10ea29",
00:08:13.760 "assigned_rate_limits": {
00:08:13.760 "rw_ios_per_sec": 0,
00:08:13.760 "rw_mbytes_per_sec": 0,
00:08:13.760 "r_mbytes_per_sec": 0,
00:08:13.760 "w_mbytes_per_sec": 0
00:08:13.760 },
00:08:13.760 "claimed": true,
00:08:13.760 "claim_type": "exclusive_write",
00:08:13.760 "zoned": false,
00:08:13.760 "supported_io_types": {
00:08:13.760 "read": true,
00:08:13.760 "write": true,
00:08:13.760 "unmap": true,
00:08:13.760 "flush": true,
00:08:13.760 "reset": true,
00:08:13.760 "nvme_admin": false,
00:08:13.760 "nvme_io": false,
00:08:13.760 "nvme_io_md": false,
00:08:13.760 "write_zeroes": true,
00:08:13.760 "zcopy": true,
00:08:13.760 "get_zone_info": false,
00:08:13.760 "zone_management": false,
00:08:13.760 "zone_append": false,
00:08:13.760 "compare": false,
00:08:13.760 "compare_and_write": false,
00:08:13.760 "abort": true,
00:08:13.760 "seek_hole": false,
00:08:13.760 "seek_data": false,
00:08:13.760 "copy": true,
00:08:13.760 "nvme_iov_md": false
00:08:13.760 },
00:08:13.760 "memory_domains": [
00:08:13.760 {
00:08:13.760 "dma_device_id": "system",
00:08:13.760 "dma_device_type": 1
00:08:13.760 },
00:08:13.760 {
00:08:13.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.760 "dma_device_type": 2
00:08:13.760 }
00:08:13.760 ],
00:08:13.760 "driver_specific": {}
00:08:13.760 }
00:08:13.760 ]
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.760 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.761 "name": "Existed_Raid",
00:08:13.761 "uuid": "11d96137-2362-4478-a43b-46022d66a0b9",
00:08:13.761 "strip_size_kb": 0,
00:08:13.761 "state": "configuring",
00:08:13.761 "raid_level": "raid1",
00:08:13.761 "superblock": true,
00:08:13.761 "num_base_bdevs": 2,
00:08:13.761 "num_base_bdevs_discovered": 1,
00:08:13.761 "num_base_bdevs_operational": 2,
00:08:13.761 "base_bdevs_list": [
00:08:13.761 {
00:08:13.761 "name": "BaseBdev1",
00:08:13.761 "uuid": "79a89817-f7c3-4b2f-acd1-253b7b10ea29",
00:08:13.761 "is_configured": true,
00:08:13.761 "data_offset": 2048,
00:08:13.761 "data_size": 63488
00:08:13.761 },
00:08:13.761 {
00:08:13.761 "name": "BaseBdev2",
00:08:13.761 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.761 "is_configured": false,
00:08:13.761 "data_offset": 0,
00:08:13.761 "data_size": 0
00:08:13.761 }
00:08:13.761 ]
00:08:13.761 }'
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.761 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.328 [2024-11-06 12:39:02.705069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:14.328 [2024-11-06 12:39:02.705141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.328 [2024-11-06 12:39:02.713149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:14.328 [2024-11-06 12:39:02.715643] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:14.328 [2024-11-06 12:39:02.715710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.328 "name": "Existed_Raid",
00:08:14.328 "uuid": "4be1a9af-852a-4e61-be37-f07ef4978920",
00:08:14.328 "strip_size_kb": 0,
00:08:14.328 "state": "configuring",
00:08:14.328 "raid_level": "raid1",
00:08:14.328 "superblock": true,
00:08:14.328 "num_base_bdevs": 2,
00:08:14.328 "num_base_bdevs_discovered": 1,
00:08:14.328 "num_base_bdevs_operational": 2,
00:08:14.328 "base_bdevs_list": [
00:08:14.328 {
00:08:14.328 "name": "BaseBdev1",
00:08:14.328 "uuid": "79a89817-f7c3-4b2f-acd1-253b7b10ea29",
00:08:14.328 "is_configured": true,
00:08:14.328 "data_offset": 2048,
00:08:14.328 "data_size": 63488
00:08:14.328 },
00:08:14.328 {
00:08:14.328 "name": "BaseBdev2",
00:08:14.328 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:14.328 "is_configured": false,
00:08:14.328 "data_offset": 0,
00:08:14.328 "data_size": 0
00:08:14.328 }
00:08:14.328 ]
00:08:14.328 }'
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.328 12:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.587 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:14.587 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.587 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.587 [2024-11-06 12:39:03.219865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:14.588 [2024-11-06 12:39:03.220262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:14.588 [2024-11-06 12:39:03.220285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:14.588 [2024-11-06 12:39:03.220631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:14.588 BaseBdev2
00:08:14.588 [2024-11-06 12:39:03.220868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:14.588 [2024-11-06 12:39:03.220906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:14.588 [2024-11-06 12:39:03.221098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:14.588 12:39:03 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@10 -- # set +x 00:08:14.588 [ 00:08:14.588 { 00:08:14.588 "name": "BaseBdev2", 00:08:14.588 "aliases": [ 00:08:14.588 "7439ad1e-9220-42ba-9e31-ca1e800069fd" 00:08:14.588 ], 00:08:14.588 "product_name": "Malloc disk", 00:08:14.588 "block_size": 512, 00:08:14.588 "num_blocks": 65536, 00:08:14.852 "uuid": "7439ad1e-9220-42ba-9e31-ca1e800069fd", 00:08:14.852 "assigned_rate_limits": { 00:08:14.852 "rw_ios_per_sec": 0, 00:08:14.852 "rw_mbytes_per_sec": 0, 00:08:14.852 "r_mbytes_per_sec": 0, 00:08:14.852 "w_mbytes_per_sec": 0 00:08:14.852 }, 00:08:14.852 "claimed": true, 00:08:14.852 "claim_type": "exclusive_write", 00:08:14.852 "zoned": false, 00:08:14.852 "supported_io_types": { 00:08:14.852 "read": true, 00:08:14.852 "write": true, 00:08:14.852 "unmap": true, 00:08:14.852 "flush": true, 00:08:14.852 "reset": true, 00:08:14.852 "nvme_admin": false, 00:08:14.852 "nvme_io": false, 00:08:14.852 "nvme_io_md": false, 00:08:14.852 "write_zeroes": true, 00:08:14.852 "zcopy": true, 00:08:14.852 "get_zone_info": false, 00:08:14.852 "zone_management": false, 00:08:14.852 "zone_append": false, 00:08:14.852 "compare": false, 00:08:14.852 "compare_and_write": false, 00:08:14.852 "abort": true, 00:08:14.852 "seek_hole": false, 00:08:14.852 "seek_data": false, 00:08:14.852 "copy": true, 00:08:14.852 "nvme_iov_md": false 00:08:14.852 }, 00:08:14.852 "memory_domains": [ 00:08:14.852 { 00:08:14.852 "dma_device_id": "system", 00:08:14.852 "dma_device_type": 1 00:08:14.852 }, 00:08:14.852 { 00:08:14.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.852 "dma_device_type": 2 00:08:14.852 } 00:08:14.852 ], 00:08:14.852 "driver_specific": {} 00:08:14.852 } 00:08:14.852 ] 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.852 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:14.852 "name": "Existed_Raid", 00:08:14.852 "uuid": "4be1a9af-852a-4e61-be37-f07ef4978920", 00:08:14.852 "strip_size_kb": 0, 00:08:14.852 "state": "online", 00:08:14.852 "raid_level": "raid1", 00:08:14.852 "superblock": true, 00:08:14.852 "num_base_bdevs": 2, 00:08:14.852 "num_base_bdevs_discovered": 2, 00:08:14.852 "num_base_bdevs_operational": 2, 00:08:14.853 "base_bdevs_list": [ 00:08:14.853 { 00:08:14.853 "name": "BaseBdev1", 00:08:14.853 "uuid": "79a89817-f7c3-4b2f-acd1-253b7b10ea29", 00:08:14.853 "is_configured": true, 00:08:14.853 "data_offset": 2048, 00:08:14.853 "data_size": 63488 00:08:14.853 }, 00:08:14.853 { 00:08:14.853 "name": "BaseBdev2", 00:08:14.853 "uuid": "7439ad1e-9220-42ba-9e31-ca1e800069fd", 00:08:14.853 "is_configured": true, 00:08:14.853 "data_offset": 2048, 00:08:14.853 "data_size": 63488 00:08:14.853 } 00:08:14.853 ] 00:08:14.853 }' 00:08:14.853 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.853 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.125 [2024-11-06 12:39:03.700431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.125 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.125 "name": "Existed_Raid", 00:08:15.125 "aliases": [ 00:08:15.125 "4be1a9af-852a-4e61-be37-f07ef4978920" 00:08:15.125 ], 00:08:15.125 "product_name": "Raid Volume", 00:08:15.125 "block_size": 512, 00:08:15.125 "num_blocks": 63488, 00:08:15.125 "uuid": "4be1a9af-852a-4e61-be37-f07ef4978920", 00:08:15.125 "assigned_rate_limits": { 00:08:15.125 "rw_ios_per_sec": 0, 00:08:15.125 "rw_mbytes_per_sec": 0, 00:08:15.125 "r_mbytes_per_sec": 0, 00:08:15.125 "w_mbytes_per_sec": 0 00:08:15.125 }, 00:08:15.125 "claimed": false, 00:08:15.125 "zoned": false, 00:08:15.125 "supported_io_types": { 00:08:15.125 "read": true, 00:08:15.125 "write": true, 00:08:15.125 "unmap": false, 00:08:15.125 "flush": false, 00:08:15.125 "reset": true, 00:08:15.125 "nvme_admin": false, 00:08:15.125 "nvme_io": false, 00:08:15.125 "nvme_io_md": false, 00:08:15.125 "write_zeroes": true, 00:08:15.125 "zcopy": false, 00:08:15.125 "get_zone_info": false, 00:08:15.125 "zone_management": false, 00:08:15.125 "zone_append": false, 00:08:15.125 "compare": false, 00:08:15.125 "compare_and_write": false, 00:08:15.125 "abort": false, 00:08:15.125 "seek_hole": false, 00:08:15.125 "seek_data": false, 00:08:15.125 "copy": false, 00:08:15.125 "nvme_iov_md": false 00:08:15.125 }, 00:08:15.125 "memory_domains": [ 00:08:15.125 { 00:08:15.125 "dma_device_id": "system", 00:08:15.125 "dma_device_type": 1 00:08:15.125 }, 
00:08:15.125 { 00:08:15.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.125 "dma_device_type": 2 00:08:15.125 }, 00:08:15.125 { 00:08:15.125 "dma_device_id": "system", 00:08:15.125 "dma_device_type": 1 00:08:15.125 }, 00:08:15.125 { 00:08:15.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.125 "dma_device_type": 2 00:08:15.125 } 00:08:15.125 ], 00:08:15.125 "driver_specific": { 00:08:15.125 "raid": { 00:08:15.125 "uuid": "4be1a9af-852a-4e61-be37-f07ef4978920", 00:08:15.125 "strip_size_kb": 0, 00:08:15.125 "state": "online", 00:08:15.125 "raid_level": "raid1", 00:08:15.126 "superblock": true, 00:08:15.126 "num_base_bdevs": 2, 00:08:15.126 "num_base_bdevs_discovered": 2, 00:08:15.126 "num_base_bdevs_operational": 2, 00:08:15.126 "base_bdevs_list": [ 00:08:15.126 { 00:08:15.126 "name": "BaseBdev1", 00:08:15.126 "uuid": "79a89817-f7c3-4b2f-acd1-253b7b10ea29", 00:08:15.126 "is_configured": true, 00:08:15.126 "data_offset": 2048, 00:08:15.126 "data_size": 63488 00:08:15.126 }, 00:08:15.126 { 00:08:15.126 "name": "BaseBdev2", 00:08:15.126 "uuid": "7439ad1e-9220-42ba-9e31-ca1e800069fd", 00:08:15.126 "is_configured": true, 00:08:15.126 "data_offset": 2048, 00:08:15.126 "data_size": 63488 00:08:15.126 } 00:08:15.126 ] 00:08:15.126 } 00:08:15.126 } 00:08:15.126 }' 00:08:15.126 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:15.386 BaseBdev2' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.386 12:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.386 [2024-11-06 12:39:03.944225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.386 
12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.386 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.645 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.645 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.645 "name": "Existed_Raid", 00:08:15.645 "uuid": "4be1a9af-852a-4e61-be37-f07ef4978920", 00:08:15.645 "strip_size_kb": 0, 00:08:15.645 "state": "online", 00:08:15.645 "raid_level": "raid1", 00:08:15.645 "superblock": true, 00:08:15.645 "num_base_bdevs": 2, 00:08:15.645 "num_base_bdevs_discovered": 1, 00:08:15.645 "num_base_bdevs_operational": 1, 00:08:15.645 "base_bdevs_list": [ 00:08:15.645 { 00:08:15.645 "name": null, 00:08:15.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.645 "is_configured": false, 00:08:15.645 "data_offset": 0, 00:08:15.645 "data_size": 63488 00:08:15.645 }, 00:08:15.645 { 00:08:15.645 "name": "BaseBdev2", 00:08:15.645 "uuid": "7439ad1e-9220-42ba-9e31-ca1e800069fd", 00:08:15.645 "is_configured": true, 00:08:15.645 "data_offset": 2048, 00:08:15.645 "data_size": 63488 00:08:15.645 } 00:08:15.645 ] 00:08:15.645 }' 00:08:15.645 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.645 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.904 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:15.904 12:39:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.904 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.904 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:15.904 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.904 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.904 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.163 [2024-11-06 12:39:04.587362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:16.163 [2024-11-06 12:39:04.587518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.163 [2024-11-06 12:39:04.674919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.163 [2024-11-06 12:39:04.675241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.163 [2024-11-06 12:39:04.675282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62935 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62935 ']' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62935 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62935 00:08:16.163 killing process with pid 62935 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62935' 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62935 00:08:16.163 [2024-11-06 12:39:04.753729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.163 12:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62935 00:08:16.163 [2024-11-06 12:39:04.768726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.156 12:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.156 00:08:17.156 real 0m5.351s 00:08:17.156 user 0m8.050s 00:08:17.156 sys 0m0.746s 00:08:17.156 12:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.156 ************************************ 00:08:17.156 END TEST raid_state_function_test_sb 00:08:17.156 ************************************ 00:08:17.156 12:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.415 12:39:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:17.415 12:39:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:17.415 12:39:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.415 12:39:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.415 ************************************ 00:08:17.415 START TEST raid_superblock_test 00:08:17.415 ************************************ 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63193 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63193 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63193 ']' 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:17.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:17.415 12:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.415 [2024-11-06 12:39:05.967090] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:08:17.415 [2024-11-06 12:39:05.967660] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63193 ] 00:08:17.674 [2024-11-06 12:39:06.155079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.674 [2024-11-06 12:39:06.285154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.932 [2024-11-06 12:39:06.488021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.932 [2024-11-06 12:39:06.488109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.499 12:39:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.499 12:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.499 malloc1 00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.499 [2024-11-06 12:39:07.026690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.499 [2024-11-06 12:39:07.026935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.499 [2024-11-06 12:39:07.026988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.499 [2024-11-06 12:39:07.027009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.499 
[2024-11-06 12:39:07.029889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:18.499 [2024-11-06 12:39:07.030086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:18.499 pt1
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.499 malloc2
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.499 [2024-11-06 12:39:07.079257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:18.499 [2024-11-06 12:39:07.079338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:18.499 [2024-11-06 12:39:07.079377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:08:18.499 [2024-11-06 12:39:07.079408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:18.499 [2024-11-06 12:39:07.082310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:18.499 [2024-11-06 12:39:07.082360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:18.499 pt2
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.499 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.499 [2024-11-06 12:39:07.087328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:18.499 [2024-11-06 12:39:07.089799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:18.499 [2024-11-06 12:39:07.090036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:18.499 [2024-11-06 12:39:07.090062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:18.500 [2024-11-06 12:39:07.090400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:18.500 [2024-11-06 12:39:07.090624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:18.500 [2024-11-06 12:39:07.090654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:08:18.500 [2024-11-06 12:39:07.090850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:18.500 "name": "raid_bdev1",
00:08:18.500 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:18.500 "strip_size_kb": 0,
00:08:18.500 "state": "online",
00:08:18.500 "raid_level": "raid1",
00:08:18.500 "superblock": true,
00:08:18.500 "num_base_bdevs": 2,
00:08:18.500 "num_base_bdevs_discovered": 2,
00:08:18.500 "num_base_bdevs_operational": 2,
00:08:18.500 "base_bdevs_list": [
00:08:18.500 {
00:08:18.500 "name": "pt1",
00:08:18.500 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:18.500 "is_configured": true,
00:08:18.500 "data_offset": 2048,
00:08:18.500 "data_size": 63488
00:08:18.500 },
00:08:18.500 {
00:08:18.500 "name": "pt2",
00:08:18.500 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:18.500 "is_configured": true,
00:08:18.500 "data_offset": 2048,
00:08:18.500 "data_size": 63488
00:08:18.500 }
00:08:18.500 ]
00:08:18.500 }'
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:18.500 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.066 [2024-11-06 12:39:07.603787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:19.066 "name": "raid_bdev1",
00:08:19.066 "aliases": [
00:08:19.066 "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2"
00:08:19.066 ],
00:08:19.066 "product_name": "Raid Volume",
00:08:19.066 "block_size": 512,
00:08:19.066 "num_blocks": 63488,
00:08:19.066 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:19.066 "assigned_rate_limits": {
00:08:19.066 "rw_ios_per_sec": 0,
00:08:19.066 "rw_mbytes_per_sec": 0,
00:08:19.066 "r_mbytes_per_sec": 0,
00:08:19.066 "w_mbytes_per_sec": 0
00:08:19.066 },
00:08:19.066 "claimed": false,
00:08:19.066 "zoned": false,
00:08:19.066 "supported_io_types": {
00:08:19.066 "read": true,
00:08:19.066 "write": true,
00:08:19.066 "unmap": false,
00:08:19.066 "flush": false,
00:08:19.066 "reset": true,
00:08:19.066 "nvme_admin": false,
00:08:19.066 "nvme_io": false,
00:08:19.066 "nvme_io_md": false,
00:08:19.066 "write_zeroes": true,
00:08:19.066 "zcopy": false,
00:08:19.066 "get_zone_info": false,
00:08:19.066 "zone_management": false,
00:08:19.066 "zone_append": false,
00:08:19.066 "compare": false,
00:08:19.066 "compare_and_write": false,
00:08:19.066 "abort": false,
00:08:19.066 "seek_hole": false,
00:08:19.066 "seek_data": false,
00:08:19.066 "copy": false,
00:08:19.066 "nvme_iov_md": false
00:08:19.066 },
00:08:19.066 "memory_domains": [
00:08:19.066 {
00:08:19.066 "dma_device_id": "system",
00:08:19.066 "dma_device_type": 1
00:08:19.066 },
00:08:19.066 {
00:08:19.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:19.066 "dma_device_type": 2
00:08:19.066 },
00:08:19.066 {
00:08:19.066 "dma_device_id": "system",
00:08:19.066 "dma_device_type": 1
00:08:19.066 },
00:08:19.066 {
00:08:19.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:19.066 "dma_device_type": 2
00:08:19.066 }
00:08:19.066 ],
00:08:19.066 "driver_specific": {
00:08:19.066 "raid": {
00:08:19.066 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:19.066 "strip_size_kb": 0,
00:08:19.066 "state": "online",
00:08:19.066 "raid_level": "raid1",
00:08:19.066 "superblock": true,
00:08:19.066 "num_base_bdevs": 2,
00:08:19.066 "num_base_bdevs_discovered": 2,
00:08:19.066 "num_base_bdevs_operational": 2,
00:08:19.066 "base_bdevs_list": [
00:08:19.066 {
00:08:19.066 "name": "pt1",
00:08:19.066 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:19.066 "is_configured": true,
00:08:19.066 "data_offset": 2048,
00:08:19.066 "data_size": 63488
00:08:19.066 },
00:08:19.066 {
00:08:19.066 "name": "pt2",
00:08:19.066 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:19.066 "is_configured": true,
00:08:19.066 "data_offset": 2048,
00:08:19.066 "data_size": 63488
00:08:19.066 }
00:08:19.066 ]
00:08:19.066 }
00:08:19.066 }
00:08:19.066 }'
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:19.066 pt2'
00:08:19.066 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.325 [2024-11-06 12:39:07.859854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3d4dd6c5-3740-4743-97cd-f08eea2bdbb2
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3d4dd6c5-3740-4743-97cd-f08eea2bdbb2 ']'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.325 [2024-11-06 12:39:07.907466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:19.325 [2024-11-06 12:39:07.907503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:19.325 [2024-11-06 12:39:07.907629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:19.325 [2024-11-06 12:39:07.907715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:19.325 [2024-11-06 12:39:07.907739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.325 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.603 12:39:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.603 [2024-11-06 12:39:08.051557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:08:19.603 [2024-11-06 12:39:08.054121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:08:19.603 [2024-11-06 12:39:08.054264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:08:19.603 [2024-11-06 12:39:08.054368] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:08:19.603 [2024-11-06 12:39:08.054405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:19.603 [2024-11-06 12:39:08.054425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:08:19.603 request:
00:08:19.603 {
00:08:19.603 "name": "raid_bdev1",
00:08:19.603 "raid_level": "raid1",
00:08:19.603 "base_bdevs": [
00:08:19.603 "malloc1",
00:08:19.603 "malloc2"
00:08:19.603 ],
00:08:19.603 "superblock": false,
00:08:19.603 "method": "bdev_raid_create",
00:08:19.603 "req_id": 1
00:08:19.603 }
00:08:19.603 Got JSON-RPC error response
00:08:19.603 response:
00:08:19.603 {
00:08:19.603 "code": -17,
00:08:19.603 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:08:19.603 }
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:19.603 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.604 [2024-11-06 12:39:08.107561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:08:19.604 [2024-11-06 12:39:08.107650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:19.604 [2024-11-06 12:39:08.107683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:08:19.604 [2024-11-06 12:39:08.107704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:19.604 [2024-11-06 12:39:08.110646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:19.604 [2024-11-06 12:39:08.110703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:08:19.604 [2024-11-06 12:39:08.110840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:08:19.604 [2024-11-06 12:39:08.110934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:08:19.604 pt1
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:19.604 "name": "raid_bdev1",
00:08:19.604 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:19.604 "strip_size_kb": 0,
00:08:19.604 "state": "configuring",
00:08:19.604 "raid_level": "raid1",
00:08:19.604 "superblock": true,
00:08:19.604 "num_base_bdevs": 2,
00:08:19.604 "num_base_bdevs_discovered": 1,
00:08:19.604 "num_base_bdevs_operational": 2,
00:08:19.604 "base_bdevs_list": [
00:08:19.604 {
00:08:19.604 "name": "pt1",
00:08:19.604 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:19.604 "is_configured": true,
00:08:19.604 "data_offset": 2048,
00:08:19.604 "data_size": 63488
00:08:19.604 },
00:08:19.604 {
00:08:19.604 "name": null,
00:08:19.604 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:19.604 "is_configured": false,
00:08:19.604 "data_offset": 2048,
00:08:19.604 "data_size": 63488
00:08:19.604 }
00:08:19.604 ]
00:08:19.604 }'
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:19.604 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.170 [2024-11-06 12:39:08.623721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:08:20.170 [2024-11-06 12:39:08.623818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:20.170 [2024-11-06 12:39:08.623853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:08:20.170 [2024-11-06 12:39:08.623874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:20.170 [2024-11-06 12:39:08.624509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:20.170 [2024-11-06 12:39:08.624554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:08:20.170 [2024-11-06 12:39:08.624667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:08:20.170 [2024-11-06 12:39:08.624711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:08:20.170 [2024-11-06 12:39:08.624870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:20.170 [2024-11-06 12:39:08.624895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:20.170 [2024-11-06 12:39:08.625225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:20.170 [2024-11-06 12:39:08.625455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:20.170 [2024-11-06 12:39:08.625474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:20.170 [2024-11-06 12:39:08.625680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:20.170 pt2
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:20.170 "name": "raid_bdev1",
00:08:20.170 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:20.170 "strip_size_kb": 0,
00:08:20.170 "state": "online",
00:08:20.170 "raid_level": "raid1",
00:08:20.170 "superblock": true,
00:08:20.170 "num_base_bdevs": 2,
00:08:20.170 "num_base_bdevs_discovered": 2,
00:08:20.170 "num_base_bdevs_operational": 2,
00:08:20.170 "base_bdevs_list": [
00:08:20.170 {
00:08:20.170 "name": "pt1",
00:08:20.170 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:20.170 "is_configured": true,
00:08:20.170 "data_offset": 2048,
00:08:20.170 "data_size": 63488
00:08:20.170 },
00:08:20.170 {
00:08:20.170 "name": "pt2",
00:08:20.170 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:20.170 "is_configured": true,
00:08:20.170 "data_offset": 2048,
00:08:20.170 "data_size": 63488
00:08:20.170 }
00:08:20.170 ]
00:08:20.170 }'
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:20.170 12:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' [2024-11-06 12:39:09.156119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:20.737 "name": "raid_bdev1",
00:08:20.737 "aliases": [
00:08:20.737 "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2"
00:08:20.737 ],
00:08:20.737 "product_name": "Raid Volume",
00:08:20.737 "block_size": 512,
00:08:20.737 "num_blocks": 63488,
00:08:20.737 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:20.737 "assigned_rate_limits": {
00:08:20.737 "rw_ios_per_sec": 0,
00:08:20.737 "rw_mbytes_per_sec": 0,
00:08:20.737 "r_mbytes_per_sec": 0,
00:08:20.737 "w_mbytes_per_sec": 0
00:08:20.737 },
00:08:20.737 "claimed": false,
00:08:20.737 "zoned": false,
00:08:20.737 "supported_io_types": {
00:08:20.737 "read": true,
00:08:20.737 "write": true,
00:08:20.737 "unmap": false,
00:08:20.737 "flush": false,
00:08:20.737 "reset": true,
00:08:20.737 "nvme_admin": false,
00:08:20.737 "nvme_io": false,
00:08:20.737 "nvme_io_md": false,
00:08:20.737 "write_zeroes": true,
00:08:20.737 "zcopy": false,
00:08:20.737 "get_zone_info": false,
00:08:20.737 "zone_management": false,
00:08:20.737 "zone_append": false,
00:08:20.737 "compare": false,
00:08:20.737 "compare_and_write": false,
00:08:20.737 "abort": false,
00:08:20.737 "seek_hole": false,
00:08:20.737 "seek_data": false,
00:08:20.737 "copy": false,
00:08:20.737 "nvme_iov_md": false
00:08:20.737 },
00:08:20.737 "memory_domains": [
00:08:20.737 {
00:08:20.737 "dma_device_id": "system",
00:08:20.737 "dma_device_type": 1
00:08:20.737 },
00:08:20.737 {
00:08:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:20.737 "dma_device_type": 2
00:08:20.737 },
00:08:20.737 {
00:08:20.737 "dma_device_id": "system",
00:08:20.737 "dma_device_type": 1
00:08:20.737 },
00:08:20.737 {
00:08:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:20.737 "dma_device_type": 2
00:08:20.737 }
00:08:20.737 ],
00:08:20.737 "driver_specific": {
00:08:20.737 "raid": {
00:08:20.737 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2",
00:08:20.737 "strip_size_kb": 0,
00:08:20.737 "state": "online",
00:08:20.737 "raid_level": "raid1",
00:08:20.737 "superblock": true,
00:08:20.737 "num_base_bdevs": 2,
00:08:20.737 "num_base_bdevs_discovered": 2,
00:08:20.737 "num_base_bdevs_operational": 2,
00:08:20.737 "base_bdevs_list": [
00:08:20.737 {
00:08:20.737 "name": "pt1",
00:08:20.737 "uuid": "00000000-0000-0000-0000-000000000001",
00:08:20.737 "is_configured": true,
00:08:20.737 "data_offset": 2048,
00:08:20.737 "data_size": 63488
00:08:20.737 },
00:08:20.737 {
00:08:20.737 "name": "pt2",
00:08:20.737 "uuid": "00000000-0000-0000-0000-000000000002",
00:08:20.737 "is_configured": true,
00:08:20.737 "data_offset": 2048,
00:08:20.737 "data_size": 63488
00:08:20.737 }
00:08:20.737 ]
00:08:20.737 }
00:08:20.737 }
00:08:20.737 }'
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:08:20.737 pt2'
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:20.737 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:20.738 12:39:09 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.738 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.997 [2024-11-06 12:39:09.416276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3d4dd6c5-3740-4743-97cd-f08eea2bdbb2 '!=' 3d4dd6c5-3740-4743-97cd-f08eea2bdbb2 ']' 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:20.997 [2024-11-06 12:39:09.479983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.997 "name": "raid_bdev1", 
00:08:20.997 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2", 00:08:20.997 "strip_size_kb": 0, 00:08:20.997 "state": "online", 00:08:20.997 "raid_level": "raid1", 00:08:20.997 "superblock": true, 00:08:20.997 "num_base_bdevs": 2, 00:08:20.997 "num_base_bdevs_discovered": 1, 00:08:20.997 "num_base_bdevs_operational": 1, 00:08:20.997 "base_bdevs_list": [ 00:08:20.997 { 00:08:20.997 "name": null, 00:08:20.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.997 "is_configured": false, 00:08:20.997 "data_offset": 0, 00:08:20.997 "data_size": 63488 00:08:20.997 }, 00:08:20.997 { 00:08:20.997 "name": "pt2", 00:08:20.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.997 "is_configured": true, 00:08:20.997 "data_offset": 2048, 00:08:20.997 "data_size": 63488 00:08:20.997 } 00:08:20.997 ] 00:08:20.997 }' 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.997 12:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.561 [2024-11-06 12:39:10.008050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.561 [2024-11-06 12:39:10.008088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.561 [2024-11-06 12:39:10.008210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.561 [2024-11-06 12:39:10.008287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.561 [2024-11-06 12:39:10.008313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:21.561 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:21.561 12:39:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.562 [2024-11-06 12:39:10.084058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.562 [2024-11-06 12:39:10.084140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.562 [2024-11-06 12:39:10.084170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:21.562 [2024-11-06 12:39:10.084213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.562 [2024-11-06 12:39:10.087181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.562 [2024-11-06 12:39:10.087249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.562 [2024-11-06 12:39:10.087366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:21.562 [2024-11-06 12:39:10.087476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.562 [2024-11-06 12:39:10.087618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:21.562 [2024-11-06 12:39:10.087657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.562 [2024-11-06 12:39:10.087957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:21.562 [2024-11-06 12:39:10.088180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:21.562 [2024-11-06 12:39:10.088219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:21.562 
[2024-11-06 12:39:10.088466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.562 pt2 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.562 "name": 
"raid_bdev1", 00:08:21.562 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2", 00:08:21.562 "strip_size_kb": 0, 00:08:21.562 "state": "online", 00:08:21.562 "raid_level": "raid1", 00:08:21.562 "superblock": true, 00:08:21.562 "num_base_bdevs": 2, 00:08:21.562 "num_base_bdevs_discovered": 1, 00:08:21.562 "num_base_bdevs_operational": 1, 00:08:21.562 "base_bdevs_list": [ 00:08:21.562 { 00:08:21.562 "name": null, 00:08:21.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.562 "is_configured": false, 00:08:21.562 "data_offset": 2048, 00:08:21.562 "data_size": 63488 00:08:21.562 }, 00:08:21.562 { 00:08:21.562 "name": "pt2", 00:08:21.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.562 "is_configured": true, 00:08:21.562 "data_offset": 2048, 00:08:21.562 "data_size": 63488 00:08:21.562 } 00:08:21.562 ] 00:08:21.562 }' 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.562 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.138 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.138 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.138 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.138 [2024-11-06 12:39:10.596509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.138 [2024-11-06 12:39:10.596700] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.138 [2024-11-06 12:39:10.596987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.139 [2024-11-06 12:39:10.597223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.139 [2024-11-06 12:39:10.597404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.139 [2024-11-06 12:39:10.652575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:22.139 [2024-11-06 12:39:10.652803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.139 [2024-11-06 12:39:10.652862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:22.139 [2024-11-06 12:39:10.652882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.139 [2024-11-06 12:39:10.655868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.139 [2024-11-06 12:39:10.656048] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:22.139 [2024-11-06 12:39:10.656215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:22.139 [2024-11-06 12:39:10.656281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:22.139 [2024-11-06 12:39:10.656473] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:22.139 [2024-11-06 12:39:10.656494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.139 [2024-11-06 12:39:10.656521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:22.139 [2024-11-06 12:39:10.656601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.139 [2024-11-06 12:39:10.656786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:22.139 [2024-11-06 12:39:10.656806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:22.139 pt1 00:08:22.139 [2024-11-06 12:39:10.657143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:22.139 [2024-11-06 12:39:10.657382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:22.139 [2024-11-06 12:39:10.657418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:22.139 [2024-11-06 12:39:10.657618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.139 "name": "raid_bdev1", 00:08:22.139 "uuid": "3d4dd6c5-3740-4743-97cd-f08eea2bdbb2", 00:08:22.139 "strip_size_kb": 0, 00:08:22.139 "state": "online", 00:08:22.139 "raid_level": "raid1", 00:08:22.139 "superblock": true, 00:08:22.139 "num_base_bdevs": 2, 00:08:22.139 "num_base_bdevs_discovered": 1, 00:08:22.139 "num_base_bdevs_operational": 1, 00:08:22.139 
"base_bdevs_list": [ 00:08:22.139 { 00:08:22.139 "name": null, 00:08:22.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.139 "is_configured": false, 00:08:22.139 "data_offset": 2048, 00:08:22.139 "data_size": 63488 00:08:22.139 }, 00:08:22.139 { 00:08:22.139 "name": "pt2", 00:08:22.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.139 "is_configured": true, 00:08:22.139 "data_offset": 2048, 00:08:22.139 "data_size": 63488 00:08:22.139 } 00:08:22.139 ] 00:08:22.139 }' 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.139 12:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.707 [2024-11-06 12:39:11.197042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3d4dd6c5-3740-4743-97cd-f08eea2bdbb2 '!=' 3d4dd6c5-3740-4743-97cd-f08eea2bdbb2 ']' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63193 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63193 ']' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63193 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63193 00:08:22.707 killing process with pid 63193 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63193' 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63193 00:08:22.707 12:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63193 00:08:22.707 [2024-11-06 12:39:11.275012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.707 [2024-11-06 12:39:11.275146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.707 [2024-11-06 12:39:11.275245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.707 [2024-11-06 12:39:11.275278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:22.966 [2024-11-06 12:39:11.461689] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.037 12:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:24.037 00:08:24.037 real 0m6.642s 00:08:24.037 user 0m10.475s 00:08:24.037 sys 0m0.982s 00:08:24.037 12:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.037 ************************************ 00:08:24.037 END TEST raid_superblock_test 00:08:24.037 ************************************ 00:08:24.037 12:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.037 12:39:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:24.037 12:39:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:24.037 12:39:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.037 12:39:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.037 ************************************ 00:08:24.037 START TEST raid_read_error_test 00:08:24.037 ************************************ 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v3twdaJVnO 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63524 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63524 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63524 ']' 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 
50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.037 12:39:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.037 [2024-11-06 12:39:12.665499] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:08:24.037 [2024-11-06 12:39:12.665650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63524 ] 00:08:24.296 [2024-11-06 12:39:12.846416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.555 [2024-11-06 12:39:12.980315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.555 [2024-11-06 12:39:13.191899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.555 [2024-11-06 12:39:13.192008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.122 12:39:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.122 BaseBdev1_malloc 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.122 true 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.122 [2024-11-06 12:39:13.722285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.122 [2024-11-06 12:39:13.722390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.122 [2024-11-06 12:39:13.722423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.122 [2024-11-06 12:39:13.722444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.122 [2024-11-06 12:39:13.725651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.122 [2024-11-06 12:39:13.725893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:25.122 BaseBdev1 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.122 BaseBdev2_malloc 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.122 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.382 true 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.382 [2024-11-06 12:39:13.784668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.382 [2024-11-06 12:39:13.784939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.382 [2024-11-06 12:39:13.784980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.382 [2024-11-06 12:39:13.785005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:25.382 [2024-11-06 12:39:13.788065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.382 [2024-11-06 12:39:13.788277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.382 BaseBdev2 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.382 [2024-11-06 12:39:13.797017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.382 [2024-11-06 12:39:13.799748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.382 [2024-11-06 12:39:13.800048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.382 [2024-11-06 12:39:13.800076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:25.382 [2024-11-06 12:39:13.800447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:25.382 [2024-11-06 12:39:13.800707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.382 [2024-11-06 12:39:13.800760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.382 [2024-11-06 12:39:13.801012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.382 "name": "raid_bdev1", 00:08:25.382 "uuid": "23faf66a-bc64-4f95-bb7a-aa48ee1677a8", 00:08:25.382 "strip_size_kb": 0, 00:08:25.382 "state": "online", 00:08:25.382 "raid_level": "raid1", 00:08:25.382 "superblock": true, 00:08:25.382 "num_base_bdevs": 2, 00:08:25.382 "num_base_bdevs_discovered": 2, 00:08:25.382 "num_base_bdevs_operational": 
2, 00:08:25.382 "base_bdevs_list": [ 00:08:25.382 { 00:08:25.382 "name": "BaseBdev1", 00:08:25.382 "uuid": "2cb5bea3-17f2-5660-a0d0-6e628838f997", 00:08:25.382 "is_configured": true, 00:08:25.382 "data_offset": 2048, 00:08:25.382 "data_size": 63488 00:08:25.382 }, 00:08:25.382 { 00:08:25.382 "name": "BaseBdev2", 00:08:25.382 "uuid": "112b9846-581e-5e14-9e07-a2e33c2c988d", 00:08:25.382 "is_configured": true, 00:08:25.382 "data_offset": 2048, 00:08:25.382 "data_size": 63488 00:08:25.382 } 00:08:25.382 ] 00:08:25.382 }' 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.382 12:39:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.949 12:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:25.949 12:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:25.949 [2024-11-06 12:39:14.458740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:26.885 
12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.885 "name": "raid_bdev1", 00:08:26.885 "uuid": "23faf66a-bc64-4f95-bb7a-aa48ee1677a8", 00:08:26.885 "strip_size_kb": 0, 00:08:26.885 "state": "online", 00:08:26.885 "raid_level": "raid1", 00:08:26.885 "superblock": true, 00:08:26.885 "num_base_bdevs": 
2, 00:08:26.885 "num_base_bdevs_discovered": 2, 00:08:26.885 "num_base_bdevs_operational": 2, 00:08:26.885 "base_bdevs_list": [ 00:08:26.885 { 00:08:26.885 "name": "BaseBdev1", 00:08:26.885 "uuid": "2cb5bea3-17f2-5660-a0d0-6e628838f997", 00:08:26.885 "is_configured": true, 00:08:26.885 "data_offset": 2048, 00:08:26.885 "data_size": 63488 00:08:26.885 }, 00:08:26.885 { 00:08:26.885 "name": "BaseBdev2", 00:08:26.885 "uuid": "112b9846-581e-5e14-9e07-a2e33c2c988d", 00:08:26.885 "is_configured": true, 00:08:26.885 "data_offset": 2048, 00:08:26.885 "data_size": 63488 00:08:26.885 } 00:08:26.885 ] 00:08:26.885 }' 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.885 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.452 [2024-11-06 12:39:15.883017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.452 [2024-11-06 12:39:15.883305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.452 [2024-11-06 12:39:15.886718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.452 { 00:08:27.452 "results": [ 00:08:27.452 { 00:08:27.452 "job": "raid_bdev1", 00:08:27.452 "core_mask": "0x1", 00:08:27.452 "workload": "randrw", 00:08:27.452 "percentage": 50, 00:08:27.452 "status": "finished", 00:08:27.452 "queue_depth": 1, 00:08:27.452 "io_size": 131072, 00:08:27.452 "runtime": 1.42185, 00:08:27.452 "iops": 11266.30797904139, 00:08:27.452 "mibps": 1408.2884973801738, 00:08:27.452 "io_failed": 0, 00:08:27.452 "io_timeout": 0, 00:08:27.452 "avg_latency_us": 
83.95147852833851, 00:08:27.452 "min_latency_us": 41.192727272727275, 00:08:27.452 "max_latency_us": 1869.2654545454545 00:08:27.452 } 00:08:27.452 ], 00:08:27.452 "core_count": 1 00:08:27.452 } 00:08:27.452 [2024-11-06 12:39:15.886920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.452 [2024-11-06 12:39:15.887134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.452 [2024-11-06 12:39:15.887164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63524 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63524 ']' 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63524 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63524 00:08:27.452 killing process with pid 63524 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63524' 00:08:27.452 12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63524 00:08:27.452 [2024-11-06 12:39:15.929476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.452 
12:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63524 00:08:27.452 [2024-11-06 12:39:16.056661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v3twdaJVnO 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:28.826 ************************************ 00:08:28.826 END TEST raid_read_error_test 00:08:28.826 ************************************ 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:28.826 00:08:28.826 real 0m4.622s 00:08:28.826 user 0m5.798s 00:08:28.826 sys 0m0.589s 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.826 12:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.826 12:39:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:28.826 12:39:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:28.826 12:39:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.826 12:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.826 ************************************ 00:08:28.826 START TEST raid_write_error_test 00:08:28.826 ************************************ 00:08:28.826 12:39:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.826 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:28.827 
12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:28.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M6o6dz1J9M 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63670 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63670 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63670 ']' 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.827 12:39:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 [2024-11-06 12:39:17.354367] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:08:28.827 [2024-11-06 12:39:17.354550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63670 ] 00:08:29.085 [2024-11-06 12:39:17.539458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.085 [2024-11-06 12:39:17.668849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.343 [2024-11-06 12:39:17.873288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.343 [2024-11-06 12:39:17.873396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 BaseBdev1_malloc 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 true 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 [2024-11-06 12:39:18.453636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:29.910 [2024-11-06 12:39:18.453712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.910 [2024-11-06 12:39:18.453746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:29.910 [2024-11-06 12:39:18.453767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.910 [2024-11-06 12:39:18.456646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.910 [2024-11-06 12:39:18.456850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:29.910 BaseBdev1 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 BaseBdev2_malloc 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:29.910 12:39:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 true 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 [2024-11-06 12:39:18.521972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:29.910 [2024-11-06 12:39:18.522055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.910 [2024-11-06 12:39:18.522085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:29.910 [2024-11-06 12:39:18.522106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.910 [2024-11-06 12:39:18.525075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.910 [2024-11-06 12:39:18.525133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:29.910 BaseBdev2 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.910 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.910 [2024-11-06 12:39:18.530095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:29.910 [2024-11-06 12:39:18.533608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.910 [2024-11-06 12:39:18.534028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.910 [2024-11-06 12:39:18.534179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.910 [2024-11-06 12:39:18.534633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:29.910 [2024-11-06 12:39:18.535014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.910 [2024-11-06 12:39:18.535148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:29.910 [2024-11-06 12:39:18.535590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.911 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.169 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.169 "name": "raid_bdev1", 00:08:30.169 "uuid": "9c990dd9-eccc-493f-ad48-b064c76db6f2", 00:08:30.169 "strip_size_kb": 0, 00:08:30.169 "state": "online", 00:08:30.169 "raid_level": "raid1", 00:08:30.169 "superblock": true, 00:08:30.169 "num_base_bdevs": 2, 00:08:30.169 "num_base_bdevs_discovered": 2, 00:08:30.169 "num_base_bdevs_operational": 2, 00:08:30.169 "base_bdevs_list": [ 00:08:30.169 { 00:08:30.169 "name": "BaseBdev1", 00:08:30.169 "uuid": "4084590e-9402-56cd-bbce-b74e8f5bad92", 00:08:30.169 "is_configured": true, 00:08:30.169 "data_offset": 2048, 00:08:30.169 "data_size": 63488 00:08:30.169 }, 00:08:30.169 { 00:08:30.169 "name": "BaseBdev2", 00:08:30.169 "uuid": "84b3f76c-0e83-5a2d-8d01-7115659b019c", 00:08:30.169 "is_configured": true, 00:08:30.169 "data_offset": 2048, 00:08:30.169 "data_size": 63488 00:08:30.169 } 00:08:30.169 ] 00:08:30.169 }' 00:08:30.169 12:39:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.169 12:39:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.428 12:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:30.428 12:39:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.686 [2024-11-06 12:39:19.167860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.621 [2024-11-06 12:39:20.028607] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:31.621 [2024-11-06 12:39:20.028685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.621 [2024-11-06 12:39:20.028929] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.621 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.621 "name": "raid_bdev1", 00:08:31.621 "uuid": "9c990dd9-eccc-493f-ad48-b064c76db6f2", 00:08:31.621 "strip_size_kb": 0, 00:08:31.621 "state": "online", 00:08:31.621 "raid_level": "raid1", 00:08:31.622 "superblock": true, 00:08:31.622 "num_base_bdevs": 2, 00:08:31.622 "num_base_bdevs_discovered": 1, 00:08:31.622 "num_base_bdevs_operational": 1, 00:08:31.622 "base_bdevs_list": [ 00:08:31.622 { 00:08:31.622 "name": null, 00:08:31.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.622 "is_configured": false, 00:08:31.622 "data_offset": 0, 00:08:31.622 "data_size": 63488 00:08:31.622 }, 00:08:31.622 { 00:08:31.622 "name": 
"BaseBdev2", 00:08:31.622 "uuid": "84b3f76c-0e83-5a2d-8d01-7115659b019c", 00:08:31.622 "is_configured": true, 00:08:31.622 "data_offset": 2048, 00:08:31.622 "data_size": 63488 00:08:31.622 } 00:08:31.622 ] 00:08:31.622 }' 00:08:31.622 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.622 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.188 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.188 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.188 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.188 [2024-11-06 12:39:20.560057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.188 [2024-11-06 12:39:20.560096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.188 [2024-11-06 12:39:20.563482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.188 [2024-11-06 12:39:20.563540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.188 [2024-11-06 12:39:20.563631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.188 [2024-11-06 12:39:20.563652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:32.188 { 00:08:32.188 "results": [ 00:08:32.188 { 00:08:32.188 "job": "raid_bdev1", 00:08:32.188 "core_mask": "0x1", 00:08:32.188 "workload": "randrw", 00:08:32.188 "percentage": 50, 00:08:32.188 "status": "finished", 00:08:32.188 "queue_depth": 1, 00:08:32.188 "io_size": 131072, 00:08:32.188 "runtime": 1.389582, 00:08:32.188 "iops": 14027.959487097558, 00:08:32.188 "mibps": 1753.4949358871947, 00:08:32.188 "io_failed": 0, 00:08:32.188 "io_timeout": 0, 
00:08:32.188 "avg_latency_us": 66.54858032953555, 00:08:32.188 "min_latency_us": 42.589090909090906, 00:08:32.188 "max_latency_us": 1832.0290909090909 00:08:32.189 } 00:08:32.189 ], 00:08:32.189 "core_count": 1 00:08:32.189 } 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63670 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63670 ']' 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63670 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63670 00:08:32.189 killing process with pid 63670 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63670' 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63670 00:08:32.189 [2024-11-06 12:39:20.596916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.189 12:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63670 00:08:32.189 [2024-11-06 12:39:20.722315] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.565 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M6o6dz1J9M 00:08:33.565 12:39:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.565 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.565 ************************************ 00:08:33.566 END TEST raid_write_error_test 00:08:33.566 ************************************ 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:33.566 00:08:33.566 real 0m4.595s 00:08:33.566 user 0m5.780s 00:08:33.566 sys 0m0.570s 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.566 12:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.566 12:39:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:33.566 12:39:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:33.566 12:39:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:33.566 12:39:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:33.566 12:39:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.566 12:39:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.566 ************************************ 00:08:33.566 START TEST raid_state_function_test 00:08:33.566 ************************************ 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.566 
12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63812 00:08:33.566 Process raid pid: 63812 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63812' 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63812 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63812 ']' 00:08:33.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.566 12:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.566 [2024-11-06 12:39:22.006059] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:08:33.566 [2024-11-06 12:39:22.006906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.566 [2024-11-06 12:39:22.196620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.825 [2024-11-06 12:39:22.347139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.107 [2024-11-06 12:39:22.577331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.107 [2024-11-06 12:39:22.577395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.674 [2024-11-06 12:39:23.067289] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.674 [2024-11-06 12:39:23.067358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.674 [2024-11-06 12:39:23.067393] 
bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.674 [2024-11-06 12:39:23.067416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.674 [2024-11-06 12:39:23.067429] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.674 [2024-11-06 12:39:23.067448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.674 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:34.675 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.675 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.675 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.675 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.675 "name": "Existed_Raid", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "strip_size_kb": 64, 00:08:34.675 "state": "configuring", 00:08:34.675 "raid_level": "raid0", 00:08:34.675 "superblock": false, 00:08:34.675 "num_base_bdevs": 3, 00:08:34.675 "num_base_bdevs_discovered": 0, 00:08:34.675 "num_base_bdevs_operational": 3, 00:08:34.675 "base_bdevs_list": [ 00:08:34.675 { 00:08:34.675 "name": "BaseBdev1", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "is_configured": false, 00:08:34.675 "data_offset": 0, 00:08:34.675 "data_size": 0 00:08:34.675 }, 00:08:34.675 { 00:08:34.675 "name": "BaseBdev2", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "is_configured": false, 00:08:34.675 "data_offset": 0, 00:08:34.675 "data_size": 0 00:08:34.675 }, 00:08:34.675 { 00:08:34.675 "name": "BaseBdev3", 00:08:34.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.675 "is_configured": false, 00:08:34.675 "data_offset": 0, 00:08:34.675 "data_size": 0 00:08:34.675 } 00:08:34.675 ] 00:08:34.675 }' 00:08:34.675 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.675 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.243 12:39:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.243 [2024-11-06 12:39:23.647415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.243 [2024-11-06 12:39:23.647614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.243 [2024-11-06 12:39:23.655366] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.243 [2024-11-06 12:39:23.655444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.243 [2024-11-06 12:39:23.655464] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.243 [2024-11-06 12:39:23.655484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.243 [2024-11-06 12:39:23.655497] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.243 [2024-11-06 12:39:23.655515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:35.243 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.243 [2024-11-06 12:39:23.700556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.243 BaseBdev1 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 [ 00:08:35.244 { 00:08:35.244 "name": "BaseBdev1", 00:08:35.244 "aliases": [ 00:08:35.244 "b7118f13-cc06-4fbc-9d96-a271e6e051fd" 00:08:35.244 ], 00:08:35.244 
"product_name": "Malloc disk", 00:08:35.244 "block_size": 512, 00:08:35.244 "num_blocks": 65536, 00:08:35.244 "uuid": "b7118f13-cc06-4fbc-9d96-a271e6e051fd", 00:08:35.244 "assigned_rate_limits": { 00:08:35.244 "rw_ios_per_sec": 0, 00:08:35.244 "rw_mbytes_per_sec": 0, 00:08:35.244 "r_mbytes_per_sec": 0, 00:08:35.244 "w_mbytes_per_sec": 0 00:08:35.244 }, 00:08:35.244 "claimed": true, 00:08:35.244 "claim_type": "exclusive_write", 00:08:35.244 "zoned": false, 00:08:35.244 "supported_io_types": { 00:08:35.244 "read": true, 00:08:35.244 "write": true, 00:08:35.244 "unmap": true, 00:08:35.244 "flush": true, 00:08:35.244 "reset": true, 00:08:35.244 "nvme_admin": false, 00:08:35.244 "nvme_io": false, 00:08:35.244 "nvme_io_md": false, 00:08:35.244 "write_zeroes": true, 00:08:35.244 "zcopy": true, 00:08:35.244 "get_zone_info": false, 00:08:35.244 "zone_management": false, 00:08:35.244 "zone_append": false, 00:08:35.244 "compare": false, 00:08:35.244 "compare_and_write": false, 00:08:35.244 "abort": true, 00:08:35.244 "seek_hole": false, 00:08:35.244 "seek_data": false, 00:08:35.244 "copy": true, 00:08:35.244 "nvme_iov_md": false 00:08:35.244 }, 00:08:35.244 "memory_domains": [ 00:08:35.244 { 00:08:35.244 "dma_device_id": "system", 00:08:35.244 "dma_device_type": 1 00:08:35.244 }, 00:08:35.244 { 00:08:35.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.244 "dma_device_type": 2 00:08:35.244 } 00:08:35.244 ], 00:08:35.244 "driver_specific": {} 00:08:35.244 } 00:08:35.244 ] 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.244 12:39:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.244 "name": "Existed_Raid", 00:08:35.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.244 "strip_size_kb": 64, 00:08:35.244 "state": "configuring", 00:08:35.244 "raid_level": "raid0", 00:08:35.244 "superblock": false, 00:08:35.244 "num_base_bdevs": 3, 00:08:35.244 "num_base_bdevs_discovered": 1, 00:08:35.244 "num_base_bdevs_operational": 3, 00:08:35.244 "base_bdevs_list": [ 00:08:35.244 { 00:08:35.244 "name": "BaseBdev1", 
00:08:35.244 "uuid": "b7118f13-cc06-4fbc-9d96-a271e6e051fd", 00:08:35.244 "is_configured": true, 00:08:35.244 "data_offset": 0, 00:08:35.244 "data_size": 65536 00:08:35.244 }, 00:08:35.244 { 00:08:35.244 "name": "BaseBdev2", 00:08:35.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.244 "is_configured": false, 00:08:35.244 "data_offset": 0, 00:08:35.244 "data_size": 0 00:08:35.244 }, 00:08:35.244 { 00:08:35.244 "name": "BaseBdev3", 00:08:35.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.244 "is_configured": false, 00:08:35.244 "data_offset": 0, 00:08:35.244 "data_size": 0 00:08:35.244 } 00:08:35.244 ] 00:08:35.244 }' 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.244 12:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.812 [2024-11-06 12:39:24.244763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.812 [2024-11-06 12:39:24.244983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.812 [2024-11-06 
12:39:24.256797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.812 [2024-11-06 12:39:24.259332] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.812 [2024-11-06 12:39:24.259403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.812 [2024-11-06 12:39:24.259425] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.812 [2024-11-06 12:39:24.259445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.812 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.812 "name": "Existed_Raid", 00:08:35.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.812 "strip_size_kb": 64, 00:08:35.812 "state": "configuring", 00:08:35.812 "raid_level": "raid0", 00:08:35.812 "superblock": false, 00:08:35.812 "num_base_bdevs": 3, 00:08:35.812 "num_base_bdevs_discovered": 1, 00:08:35.813 "num_base_bdevs_operational": 3, 00:08:35.813 "base_bdevs_list": [ 00:08:35.813 { 00:08:35.813 "name": "BaseBdev1", 00:08:35.813 "uuid": "b7118f13-cc06-4fbc-9d96-a271e6e051fd", 00:08:35.813 "is_configured": true, 00:08:35.813 "data_offset": 0, 00:08:35.813 "data_size": 65536 00:08:35.813 }, 00:08:35.813 { 00:08:35.813 "name": "BaseBdev2", 00:08:35.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.813 "is_configured": false, 00:08:35.813 "data_offset": 0, 00:08:35.813 "data_size": 0 00:08:35.813 }, 00:08:35.813 { 00:08:35.813 "name": "BaseBdev3", 00:08:35.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.813 "is_configured": false, 00:08:35.813 "data_offset": 0, 00:08:35.813 "data_size": 0 00:08:35.813 } 00:08:35.813 ] 00:08:35.813 }' 00:08:35.813 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:35.813 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 [2024-11-06 12:39:24.823493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.380 BaseBdev2 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.380 12:39:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 [ 00:08:36.380 { 00:08:36.380 "name": "BaseBdev2", 00:08:36.380 "aliases": [ 00:08:36.380 "a66d57b4-5365-453a-a11d-c988f1574ea5" 00:08:36.380 ], 00:08:36.380 "product_name": "Malloc disk", 00:08:36.380 "block_size": 512, 00:08:36.380 "num_blocks": 65536, 00:08:36.380 "uuid": "a66d57b4-5365-453a-a11d-c988f1574ea5", 00:08:36.380 "assigned_rate_limits": { 00:08:36.380 "rw_ios_per_sec": 0, 00:08:36.380 "rw_mbytes_per_sec": 0, 00:08:36.380 "r_mbytes_per_sec": 0, 00:08:36.380 "w_mbytes_per_sec": 0 00:08:36.380 }, 00:08:36.380 "claimed": true, 00:08:36.380 "claim_type": "exclusive_write", 00:08:36.380 "zoned": false, 00:08:36.380 "supported_io_types": { 00:08:36.380 "read": true, 00:08:36.380 "write": true, 00:08:36.380 "unmap": true, 00:08:36.380 "flush": true, 00:08:36.380 "reset": true, 00:08:36.380 "nvme_admin": false, 00:08:36.380 "nvme_io": false, 00:08:36.380 "nvme_io_md": false, 00:08:36.380 "write_zeroes": true, 00:08:36.380 "zcopy": true, 00:08:36.380 "get_zone_info": false, 00:08:36.380 "zone_management": false, 00:08:36.380 "zone_append": false, 00:08:36.380 "compare": false, 00:08:36.380 "compare_and_write": false, 00:08:36.380 "abort": true, 00:08:36.380 "seek_hole": false, 00:08:36.380 "seek_data": false, 00:08:36.380 "copy": true, 00:08:36.380 "nvme_iov_md": false 00:08:36.380 }, 00:08:36.380 "memory_domains": [ 00:08:36.380 { 00:08:36.380 "dma_device_id": "system", 00:08:36.380 "dma_device_type": 1 00:08:36.380 }, 00:08:36.380 { 00:08:36.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.380 "dma_device_type": 2 00:08:36.380 } 00:08:36.380 ], 00:08:36.380 "driver_specific": {} 00:08:36.380 } 00:08:36.380 ] 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.380 12:39:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.380 "name": "Existed_Raid", 00:08:36.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.380 "strip_size_kb": 64, 00:08:36.380 "state": "configuring", 00:08:36.380 "raid_level": "raid0", 00:08:36.380 "superblock": false, 00:08:36.380 "num_base_bdevs": 3, 00:08:36.380 "num_base_bdevs_discovered": 2, 00:08:36.380 "num_base_bdevs_operational": 3, 00:08:36.380 "base_bdevs_list": [ 00:08:36.380 { 00:08:36.380 "name": "BaseBdev1", 00:08:36.380 "uuid": "b7118f13-cc06-4fbc-9d96-a271e6e051fd", 00:08:36.380 "is_configured": true, 00:08:36.380 "data_offset": 0, 00:08:36.380 "data_size": 65536 00:08:36.380 }, 00:08:36.380 { 00:08:36.380 "name": "BaseBdev2", 00:08:36.380 "uuid": "a66d57b4-5365-453a-a11d-c988f1574ea5", 00:08:36.380 "is_configured": true, 00:08:36.380 "data_offset": 0, 00:08:36.380 "data_size": 65536 00:08:36.380 }, 00:08:36.380 { 00:08:36.380 "name": "BaseBdev3", 00:08:36.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.380 "is_configured": false, 00:08:36.380 "data_offset": 0, 00:08:36.380 "data_size": 0 00:08:36.380 } 00:08:36.380 ] 00:08:36.380 }' 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.380 12:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.947 [2024-11-06 12:39:25.480991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.947 [2024-11-06 12:39:25.481052] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.947 [2024-11-06 12:39:25.481076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:36.947 [2024-11-06 12:39:25.481727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:36.947 [2024-11-06 12:39:25.482108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.947 [2024-11-06 12:39:25.482135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:36.947 [2024-11-06 12:39:25.482500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.947 BaseBdev3 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.947 
12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.947 [ 00:08:36.947 { 00:08:36.947 "name": "BaseBdev3", 00:08:36.947 "aliases": [ 00:08:36.947 "cd352c8d-1f7a-4b2a-b417-fe41c0c57590" 00:08:36.947 ], 00:08:36.947 "product_name": "Malloc disk", 00:08:36.947 "block_size": 512, 00:08:36.947 "num_blocks": 65536, 00:08:36.947 "uuid": "cd352c8d-1f7a-4b2a-b417-fe41c0c57590", 00:08:36.947 "assigned_rate_limits": { 00:08:36.947 "rw_ios_per_sec": 0, 00:08:36.947 "rw_mbytes_per_sec": 0, 00:08:36.947 "r_mbytes_per_sec": 0, 00:08:36.947 "w_mbytes_per_sec": 0 00:08:36.947 }, 00:08:36.947 "claimed": true, 00:08:36.947 "claim_type": "exclusive_write", 00:08:36.947 "zoned": false, 00:08:36.947 "supported_io_types": { 00:08:36.947 "read": true, 00:08:36.947 "write": true, 00:08:36.947 "unmap": true, 00:08:36.947 "flush": true, 00:08:36.947 "reset": true, 00:08:36.947 "nvme_admin": false, 00:08:36.947 "nvme_io": false, 00:08:36.947 "nvme_io_md": false, 00:08:36.947 "write_zeroes": true, 00:08:36.947 "zcopy": true, 00:08:36.947 "get_zone_info": false, 00:08:36.947 "zone_management": false, 00:08:36.947 "zone_append": false, 00:08:36.947 "compare": false, 00:08:36.947 "compare_and_write": false, 00:08:36.947 "abort": true, 00:08:36.947 "seek_hole": false, 00:08:36.947 "seek_data": false, 00:08:36.947 "copy": true, 00:08:36.947 "nvme_iov_md": false 00:08:36.947 }, 00:08:36.947 "memory_domains": [ 00:08:36.947 { 00:08:36.947 "dma_device_id": "system", 00:08:36.947 "dma_device_type": 1 00:08:36.947 }, 00:08:36.947 { 00:08:36.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.947 "dma_device_type": 2 00:08:36.947 } 00:08:36.947 ], 00:08:36.947 "driver_specific": {} 00:08:36.947 } 00:08:36.947 ] 
00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.947 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.948 "name": "Existed_Raid", 00:08:36.948 "uuid": "74edb17e-70a8-4b86-942b-7b6eed4db535", 00:08:36.948 "strip_size_kb": 64, 00:08:36.948 "state": "online", 00:08:36.948 "raid_level": "raid0", 00:08:36.948 "superblock": false, 00:08:36.948 "num_base_bdevs": 3, 00:08:36.948 "num_base_bdevs_discovered": 3, 00:08:36.948 "num_base_bdevs_operational": 3, 00:08:36.948 "base_bdevs_list": [ 00:08:36.948 { 00:08:36.948 "name": "BaseBdev1", 00:08:36.948 "uuid": "b7118f13-cc06-4fbc-9d96-a271e6e051fd", 00:08:36.948 "is_configured": true, 00:08:36.948 "data_offset": 0, 00:08:36.948 "data_size": 65536 00:08:36.948 }, 00:08:36.948 { 00:08:36.948 "name": "BaseBdev2", 00:08:36.948 "uuid": "a66d57b4-5365-453a-a11d-c988f1574ea5", 00:08:36.948 "is_configured": true, 00:08:36.948 "data_offset": 0, 00:08:36.948 "data_size": 65536 00:08:36.948 }, 00:08:36.948 { 00:08:36.948 "name": "BaseBdev3", 00:08:36.948 "uuid": "cd352c8d-1f7a-4b2a-b417-fe41c0c57590", 00:08:36.948 "is_configured": true, 00:08:36.948 "data_offset": 0, 00:08:36.948 "data_size": 65536 00:08:36.948 } 00:08:36.948 ] 00:08:36.948 }' 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.948 12:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.514 [2024-11-06 12:39:26.049618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.514 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.514 "name": "Existed_Raid", 00:08:37.514 "aliases": [ 00:08:37.514 "74edb17e-70a8-4b86-942b-7b6eed4db535" 00:08:37.514 ], 00:08:37.514 "product_name": "Raid Volume", 00:08:37.514 "block_size": 512, 00:08:37.514 "num_blocks": 196608, 00:08:37.514 "uuid": "74edb17e-70a8-4b86-942b-7b6eed4db535", 00:08:37.514 "assigned_rate_limits": { 00:08:37.514 "rw_ios_per_sec": 0, 00:08:37.514 "rw_mbytes_per_sec": 0, 00:08:37.515 "r_mbytes_per_sec": 0, 00:08:37.515 "w_mbytes_per_sec": 0 00:08:37.515 }, 00:08:37.515 "claimed": false, 00:08:37.515 "zoned": false, 00:08:37.515 "supported_io_types": { 00:08:37.515 "read": true, 00:08:37.515 "write": true, 00:08:37.515 "unmap": true, 00:08:37.515 "flush": true, 00:08:37.515 "reset": true, 00:08:37.515 "nvme_admin": false, 00:08:37.515 "nvme_io": false, 00:08:37.515 "nvme_io_md": false, 00:08:37.515 "write_zeroes": true, 00:08:37.515 "zcopy": false, 00:08:37.515 "get_zone_info": false, 00:08:37.515 "zone_management": false, 00:08:37.515 
"zone_append": false, 00:08:37.515 "compare": false, 00:08:37.515 "compare_and_write": false, 00:08:37.515 "abort": false, 00:08:37.515 "seek_hole": false, 00:08:37.515 "seek_data": false, 00:08:37.515 "copy": false, 00:08:37.515 "nvme_iov_md": false 00:08:37.515 }, 00:08:37.515 "memory_domains": [ 00:08:37.515 { 00:08:37.515 "dma_device_id": "system", 00:08:37.515 "dma_device_type": 1 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.515 "dma_device_type": 2 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "dma_device_id": "system", 00:08:37.515 "dma_device_type": 1 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.515 "dma_device_type": 2 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "dma_device_id": "system", 00:08:37.515 "dma_device_type": 1 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.515 "dma_device_type": 2 00:08:37.515 } 00:08:37.515 ], 00:08:37.515 "driver_specific": { 00:08:37.515 "raid": { 00:08:37.515 "uuid": "74edb17e-70a8-4b86-942b-7b6eed4db535", 00:08:37.515 "strip_size_kb": 64, 00:08:37.515 "state": "online", 00:08:37.515 "raid_level": "raid0", 00:08:37.515 "superblock": false, 00:08:37.515 "num_base_bdevs": 3, 00:08:37.515 "num_base_bdevs_discovered": 3, 00:08:37.515 "num_base_bdevs_operational": 3, 00:08:37.515 "base_bdevs_list": [ 00:08:37.515 { 00:08:37.515 "name": "BaseBdev1", 00:08:37.515 "uuid": "b7118f13-cc06-4fbc-9d96-a271e6e051fd", 00:08:37.515 "is_configured": true, 00:08:37.515 "data_offset": 0, 00:08:37.515 "data_size": 65536 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "name": "BaseBdev2", 00:08:37.515 "uuid": "a66d57b4-5365-453a-a11d-c988f1574ea5", 00:08:37.515 "is_configured": true, 00:08:37.515 "data_offset": 0, 00:08:37.515 "data_size": 65536 00:08:37.515 }, 00:08:37.515 { 00:08:37.515 "name": "BaseBdev3", 00:08:37.515 "uuid": "cd352c8d-1f7a-4b2a-b417-fe41c0c57590", 00:08:37.515 "is_configured": true, 
00:08:37.515 "data_offset": 0, 00:08:37.515 "data_size": 65536 00:08:37.515 } 00:08:37.515 ] 00:08:37.515 } 00:08:37.515 } 00:08:37.515 }' 00:08:37.515 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.515 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.515 BaseBdev2 00:08:37.515 BaseBdev3' 00:08:37.515 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.773 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.774 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.774 [2024-11-06 12:39:26.361346] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.774 [2024-11-06 12:39:26.361390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.774 [2024-11-06 12:39:26.361468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.032 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.032 "name": "Existed_Raid", 00:08:38.032 "uuid": "74edb17e-70a8-4b86-942b-7b6eed4db535", 00:08:38.032 "strip_size_kb": 64, 00:08:38.032 "state": "offline", 00:08:38.032 "raid_level": "raid0", 00:08:38.032 "superblock": false, 00:08:38.032 "num_base_bdevs": 3, 00:08:38.032 "num_base_bdevs_discovered": 2, 00:08:38.032 "num_base_bdevs_operational": 2, 00:08:38.032 "base_bdevs_list": [ 00:08:38.032 { 00:08:38.032 "name": null, 00:08:38.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.032 "is_configured": false, 00:08:38.032 "data_offset": 0, 00:08:38.032 "data_size": 65536 00:08:38.032 }, 00:08:38.032 { 00:08:38.032 "name": "BaseBdev2", 00:08:38.032 "uuid": "a66d57b4-5365-453a-a11d-c988f1574ea5", 00:08:38.032 "is_configured": true, 00:08:38.032 "data_offset": 0, 00:08:38.033 "data_size": 65536 00:08:38.033 }, 00:08:38.033 { 00:08:38.033 "name": "BaseBdev3", 00:08:38.033 "uuid": "cd352c8d-1f7a-4b2a-b417-fe41c0c57590", 00:08:38.033 "is_configured": true, 00:08:38.033 "data_offset": 0, 00:08:38.033 "data_size": 65536 00:08:38.033 } 00:08:38.033 ] 00:08:38.033 }' 00:08:38.033 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.033 12:39:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 12:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 [2024-11-06 12:39:27.015812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.599 12:39:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 [2024-11-06 12:39:27.161666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.599 [2024-11-06 12:39:27.161738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:38.599 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.600 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.600 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.600 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.600 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.600 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.600 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 BaseBdev2 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.858 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 [ 00:08:38.858 { 00:08:38.858 "name": "BaseBdev2", 00:08:38.858 "aliases": [ 00:08:38.858 "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6" 00:08:38.858 ], 00:08:38.858 "product_name": "Malloc disk", 00:08:38.858 "block_size": 512, 00:08:38.858 "num_blocks": 65536, 00:08:38.858 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:38.858 "assigned_rate_limits": { 00:08:38.858 "rw_ios_per_sec": 0, 00:08:38.858 "rw_mbytes_per_sec": 0, 00:08:38.858 "r_mbytes_per_sec": 0, 00:08:38.858 "w_mbytes_per_sec": 0 00:08:38.858 }, 00:08:38.858 "claimed": false, 00:08:38.858 "zoned": false, 00:08:38.858 "supported_io_types": { 00:08:38.858 "read": true, 00:08:38.858 "write": true, 00:08:38.858 "unmap": true, 00:08:38.858 "flush": true, 00:08:38.858 "reset": true, 00:08:38.858 "nvme_admin": false, 00:08:38.858 "nvme_io": false, 00:08:38.858 "nvme_io_md": false, 00:08:38.858 "write_zeroes": true, 00:08:38.858 "zcopy": true, 00:08:38.858 "get_zone_info": false, 00:08:38.858 "zone_management": false, 00:08:38.858 "zone_append": false, 00:08:38.858 "compare": false, 00:08:38.858 "compare_and_write": false, 00:08:38.858 "abort": true, 00:08:38.858 "seek_hole": false, 00:08:38.858 "seek_data": false, 00:08:38.859 "copy": true, 00:08:38.859 "nvme_iov_md": false 00:08:38.859 }, 00:08:38.859 "memory_domains": [ 00:08:38.859 { 00:08:38.859 "dma_device_id": "system", 00:08:38.859 "dma_device_type": 1 00:08:38.859 }, 
00:08:38.859 { 00:08:38.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.859 "dma_device_type": 2 00:08:38.859 } 00:08:38.859 ], 00:08:38.859 "driver_specific": {} 00:08:38.859 } 00:08:38.859 ] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.859 BaseBdev3 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.859 [ 00:08:38.859 { 00:08:38.859 "name": "BaseBdev3", 00:08:38.859 "aliases": [ 00:08:38.859 "4aefc800-8b9b-4957-929b-d59d4ac5fa06" 00:08:38.859 ], 00:08:38.859 "product_name": "Malloc disk", 00:08:38.859 "block_size": 512, 00:08:38.859 "num_blocks": 65536, 00:08:38.859 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:38.859 "assigned_rate_limits": { 00:08:38.859 "rw_ios_per_sec": 0, 00:08:38.859 "rw_mbytes_per_sec": 0, 00:08:38.859 "r_mbytes_per_sec": 0, 00:08:38.859 "w_mbytes_per_sec": 0 00:08:38.859 }, 00:08:38.859 "claimed": false, 00:08:38.859 "zoned": false, 00:08:38.859 "supported_io_types": { 00:08:38.859 "read": true, 00:08:38.859 "write": true, 00:08:38.859 "unmap": true, 00:08:38.859 "flush": true, 00:08:38.859 "reset": true, 00:08:38.859 "nvme_admin": false, 00:08:38.859 "nvme_io": false, 00:08:38.859 "nvme_io_md": false, 00:08:38.859 "write_zeroes": true, 00:08:38.859 "zcopy": true, 00:08:38.859 "get_zone_info": false, 00:08:38.859 "zone_management": false, 00:08:38.859 "zone_append": false, 00:08:38.859 "compare": false, 00:08:38.859 "compare_and_write": false, 00:08:38.859 "abort": true, 00:08:38.859 "seek_hole": false, 00:08:38.859 "seek_data": false, 00:08:38.859 "copy": true, 00:08:38.859 "nvme_iov_md": false 00:08:38.859 }, 00:08:38.859 "memory_domains": [ 00:08:38.859 { 00:08:38.859 "dma_device_id": "system", 00:08:38.859 "dma_device_type": 1 00:08:38.859 }, 00:08:38.859 { 
00:08:38.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.859 "dma_device_type": 2 00:08:38.859 } 00:08:38.859 ], 00:08:38.859 "driver_specific": {} 00:08:38.859 } 00:08:38.859 ] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.859 [2024-11-06 12:39:27.458053] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.859 [2024-11-06 12:39:27.458121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.859 [2024-11-06 12:39:27.458174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.859 [2024-11-06 12:39:27.460713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.859 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.164 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.164 "name": "Existed_Raid", 00:08:39.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.164 "strip_size_kb": 64, 00:08:39.164 "state": "configuring", 00:08:39.164 "raid_level": "raid0", 00:08:39.164 "superblock": false, 00:08:39.164 "num_base_bdevs": 3, 00:08:39.164 "num_base_bdevs_discovered": 2, 00:08:39.164 "num_base_bdevs_operational": 3, 00:08:39.164 "base_bdevs_list": [ 00:08:39.164 { 00:08:39.164 "name": "BaseBdev1", 00:08:39.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.164 
"is_configured": false, 00:08:39.164 "data_offset": 0, 00:08:39.164 "data_size": 0 00:08:39.164 }, 00:08:39.164 { 00:08:39.164 "name": "BaseBdev2", 00:08:39.164 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:39.164 "is_configured": true, 00:08:39.164 "data_offset": 0, 00:08:39.164 "data_size": 65536 00:08:39.164 }, 00:08:39.165 { 00:08:39.165 "name": "BaseBdev3", 00:08:39.165 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:39.165 "is_configured": true, 00:08:39.165 "data_offset": 0, 00:08:39.165 "data_size": 65536 00:08:39.165 } 00:08:39.165 ] 00:08:39.165 }' 00:08:39.165 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.165 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.423 [2024-11-06 12:39:27.994172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.423 12:39:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.423 12:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.423 "name": "Existed_Raid", 00:08:39.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.423 "strip_size_kb": 64, 00:08:39.423 "state": "configuring", 00:08:39.423 "raid_level": "raid0", 00:08:39.423 "superblock": false, 00:08:39.423 "num_base_bdevs": 3, 00:08:39.423 "num_base_bdevs_discovered": 1, 00:08:39.423 "num_base_bdevs_operational": 3, 00:08:39.423 "base_bdevs_list": [ 00:08:39.423 { 00:08:39.423 "name": "BaseBdev1", 00:08:39.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.423 "is_configured": false, 00:08:39.423 "data_offset": 0, 00:08:39.423 "data_size": 0 00:08:39.423 }, 00:08:39.423 { 00:08:39.423 "name": null, 00:08:39.423 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:39.423 "is_configured": false, 00:08:39.423 "data_offset": 0, 
00:08:39.423 "data_size": 65536 00:08:39.423 }, 00:08:39.423 { 00:08:39.423 "name": "BaseBdev3", 00:08:39.423 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:39.423 "is_configured": true, 00:08:39.423 "data_offset": 0, 00:08:39.423 "data_size": 65536 00:08:39.423 } 00:08:39.423 ] 00:08:39.423 }' 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.423 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.991 [2024-11-06 12:39:28.573031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.991 BaseBdev1 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.991 [ 00:08:39.991 { 00:08:39.991 "name": "BaseBdev1", 00:08:39.991 "aliases": [ 00:08:39.991 "52fed860-c9bf-4162-be34-20b259a18725" 00:08:39.991 ], 00:08:39.991 "product_name": "Malloc disk", 00:08:39.991 "block_size": 512, 00:08:39.991 "num_blocks": 65536, 00:08:39.991 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:39.991 "assigned_rate_limits": { 00:08:39.991 "rw_ios_per_sec": 0, 00:08:39.991 "rw_mbytes_per_sec": 0, 00:08:39.991 "r_mbytes_per_sec": 0, 00:08:39.991 "w_mbytes_per_sec": 0 00:08:39.991 }, 00:08:39.991 "claimed": true, 00:08:39.991 "claim_type": "exclusive_write", 00:08:39.991 "zoned": false, 00:08:39.991 "supported_io_types": { 00:08:39.991 "read": true, 00:08:39.991 "write": true, 00:08:39.991 "unmap": 
true, 00:08:39.991 "flush": true, 00:08:39.991 "reset": true, 00:08:39.991 "nvme_admin": false, 00:08:39.991 "nvme_io": false, 00:08:39.991 "nvme_io_md": false, 00:08:39.991 "write_zeroes": true, 00:08:39.991 "zcopy": true, 00:08:39.991 "get_zone_info": false, 00:08:39.991 "zone_management": false, 00:08:39.991 "zone_append": false, 00:08:39.991 "compare": false, 00:08:39.991 "compare_and_write": false, 00:08:39.991 "abort": true, 00:08:39.991 "seek_hole": false, 00:08:39.991 "seek_data": false, 00:08:39.991 "copy": true, 00:08:39.991 "nvme_iov_md": false 00:08:39.991 }, 00:08:39.991 "memory_domains": [ 00:08:39.991 { 00:08:39.991 "dma_device_id": "system", 00:08:39.991 "dma_device_type": 1 00:08:39.991 }, 00:08:39.991 { 00:08:39.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.991 "dma_device_type": 2 00:08:39.991 } 00:08:39.991 ], 00:08:39.991 "driver_specific": {} 00:08:39.991 } 00:08:39.991 ] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.991 12:39:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.991 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.250 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.250 "name": "Existed_Raid", 00:08:40.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.250 "strip_size_kb": 64, 00:08:40.250 "state": "configuring", 00:08:40.250 "raid_level": "raid0", 00:08:40.250 "superblock": false, 00:08:40.250 "num_base_bdevs": 3, 00:08:40.250 "num_base_bdevs_discovered": 2, 00:08:40.250 "num_base_bdevs_operational": 3, 00:08:40.250 "base_bdevs_list": [ 00:08:40.250 { 00:08:40.250 "name": "BaseBdev1", 00:08:40.250 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:40.250 "is_configured": true, 00:08:40.250 "data_offset": 0, 00:08:40.250 "data_size": 65536 00:08:40.250 }, 00:08:40.250 { 00:08:40.250 "name": null, 00:08:40.250 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:40.250 "is_configured": false, 00:08:40.250 "data_offset": 0, 00:08:40.250 "data_size": 65536 00:08:40.250 }, 00:08:40.250 { 00:08:40.250 "name": "BaseBdev3", 00:08:40.250 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:40.250 "is_configured": true, 00:08:40.250 "data_offset": 0, 
00:08:40.250 "data_size": 65536 00:08:40.250 } 00:08:40.250 ] 00:08:40.250 }' 00:08:40.250 12:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.250 12:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.508 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:40.508 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.767 [2024-11-06 12:39:29.193330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.767 "name": "Existed_Raid", 00:08:40.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.767 "strip_size_kb": 64, 00:08:40.767 "state": "configuring", 00:08:40.767 "raid_level": "raid0", 00:08:40.767 "superblock": false, 00:08:40.767 "num_base_bdevs": 3, 00:08:40.767 "num_base_bdevs_discovered": 1, 00:08:40.767 "num_base_bdevs_operational": 3, 00:08:40.767 "base_bdevs_list": [ 00:08:40.767 { 00:08:40.767 "name": "BaseBdev1", 00:08:40.767 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:40.767 "is_configured": true, 00:08:40.767 "data_offset": 0, 00:08:40.767 "data_size": 65536 00:08:40.767 }, 00:08:40.767 { 
00:08:40.767 "name": null, 00:08:40.767 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:40.767 "is_configured": false, 00:08:40.767 "data_offset": 0, 00:08:40.767 "data_size": 65536 00:08:40.767 }, 00:08:40.767 { 00:08:40.767 "name": null, 00:08:40.767 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:40.767 "is_configured": false, 00:08:40.767 "data_offset": 0, 00:08:40.767 "data_size": 65536 00:08:40.767 } 00:08:40.767 ] 00:08:40.767 }' 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.767 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 [2024-11-06 12:39:29.749508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.334 "name": "Existed_Raid", 00:08:41.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.334 "strip_size_kb": 64, 00:08:41.334 "state": "configuring", 00:08:41.334 "raid_level": "raid0", 00:08:41.334 
"superblock": false, 00:08:41.334 "num_base_bdevs": 3, 00:08:41.334 "num_base_bdevs_discovered": 2, 00:08:41.334 "num_base_bdevs_operational": 3, 00:08:41.334 "base_bdevs_list": [ 00:08:41.334 { 00:08:41.334 "name": "BaseBdev1", 00:08:41.334 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:41.334 "is_configured": true, 00:08:41.334 "data_offset": 0, 00:08:41.334 "data_size": 65536 00:08:41.334 }, 00:08:41.334 { 00:08:41.334 "name": null, 00:08:41.334 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:41.334 "is_configured": false, 00:08:41.334 "data_offset": 0, 00:08:41.334 "data_size": 65536 00:08:41.334 }, 00:08:41.334 { 00:08:41.334 "name": "BaseBdev3", 00:08:41.334 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:41.334 "is_configured": true, 00:08:41.334 "data_offset": 0, 00:08:41.334 "data_size": 65536 00:08:41.334 } 00:08:41.334 ] 00:08:41.334 }' 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.334 12:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.899 [2024-11-06 12:39:30.329644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.899 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.899 "name": "Existed_Raid", 00:08:41.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.899 "strip_size_kb": 64, 00:08:41.899 "state": "configuring", 00:08:41.899 "raid_level": "raid0", 00:08:41.899 "superblock": false, 00:08:41.900 "num_base_bdevs": 3, 00:08:41.900 "num_base_bdevs_discovered": 1, 00:08:41.900 "num_base_bdevs_operational": 3, 00:08:41.900 "base_bdevs_list": [ 00:08:41.900 { 00:08:41.900 "name": null, 00:08:41.900 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:41.900 "is_configured": false, 00:08:41.900 "data_offset": 0, 00:08:41.900 "data_size": 65536 00:08:41.900 }, 00:08:41.900 { 00:08:41.900 "name": null, 00:08:41.900 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:41.900 "is_configured": false, 00:08:41.900 "data_offset": 0, 00:08:41.900 "data_size": 65536 00:08:41.900 }, 00:08:41.900 { 00:08:41.900 "name": "BaseBdev3", 00:08:41.900 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:41.900 "is_configured": true, 00:08:41.900 "data_offset": 0, 00:08:41.900 "data_size": 65536 00:08:41.900 } 00:08:41.900 ] 00:08:41.900 }' 00:08:41.900 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.900 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.467 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.467 12:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.467 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.467 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.467 12:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.467 [2024-11-06 12:39:31.016114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.467 "name": "Existed_Raid", 00:08:42.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.467 "strip_size_kb": 64, 00:08:42.467 "state": "configuring", 00:08:42.467 "raid_level": "raid0", 00:08:42.467 "superblock": false, 00:08:42.467 "num_base_bdevs": 3, 00:08:42.467 "num_base_bdevs_discovered": 2, 00:08:42.467 "num_base_bdevs_operational": 3, 00:08:42.467 "base_bdevs_list": [ 00:08:42.467 { 00:08:42.467 "name": null, 00:08:42.467 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:42.467 "is_configured": false, 00:08:42.467 "data_offset": 0, 00:08:42.467 "data_size": 65536 00:08:42.467 }, 00:08:42.467 { 00:08:42.467 "name": "BaseBdev2", 00:08:42.467 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:42.467 "is_configured": true, 00:08:42.467 "data_offset": 0, 00:08:42.467 "data_size": 65536 00:08:42.467 }, 00:08:42.467 { 00:08:42.467 "name": "BaseBdev3", 00:08:42.467 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:42.467 "is_configured": true, 00:08:42.467 "data_offset": 0, 00:08:42.467 "data_size": 65536 00:08:42.467 } 00:08:42.467 ] 00:08:42.467 }' 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.467 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 52fed860-c9bf-4162-be34-20b259a18725 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.034 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.293 [2024-11-06 12:39:31.694740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:43.293 [2024-11-06 12:39:31.694811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.293 [2024-11-06 12:39:31.694828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:43.293 [2024-11-06 12:39:31.695160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:08:43.293 [2024-11-06 12:39:31.695414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.293 [2024-11-06 12:39:31.695431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:43.293 [2024-11-06 12:39:31.695766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.293 NewBaseBdev 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.293 [ 00:08:43.293 { 00:08:43.293 "name": "NewBaseBdev", 00:08:43.293 "aliases": [ 00:08:43.293 "52fed860-c9bf-4162-be34-20b259a18725" 00:08:43.293 ], 00:08:43.293 "product_name": "Malloc disk", 00:08:43.293 "block_size": 512, 00:08:43.293 "num_blocks": 65536, 00:08:43.293 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:43.293 "assigned_rate_limits": { 00:08:43.293 "rw_ios_per_sec": 0, 00:08:43.293 "rw_mbytes_per_sec": 0, 00:08:43.293 "r_mbytes_per_sec": 0, 00:08:43.293 "w_mbytes_per_sec": 0 00:08:43.293 }, 00:08:43.293 "claimed": true, 00:08:43.293 "claim_type": "exclusive_write", 00:08:43.293 "zoned": false, 00:08:43.293 "supported_io_types": { 00:08:43.293 "read": true, 00:08:43.293 "write": true, 00:08:43.293 "unmap": true, 00:08:43.293 "flush": true, 00:08:43.293 "reset": true, 00:08:43.293 "nvme_admin": false, 00:08:43.293 "nvme_io": false, 00:08:43.293 "nvme_io_md": false, 00:08:43.293 "write_zeroes": true, 00:08:43.293 "zcopy": true, 00:08:43.293 "get_zone_info": false, 00:08:43.293 "zone_management": false, 00:08:43.293 "zone_append": false, 00:08:43.293 "compare": false, 00:08:43.293 "compare_and_write": false, 00:08:43.293 "abort": true, 00:08:43.293 "seek_hole": false, 00:08:43.293 "seek_data": false, 00:08:43.293 "copy": true, 00:08:43.293 "nvme_iov_md": false 00:08:43.293 }, 00:08:43.293 "memory_domains": [ 00:08:43.293 { 00:08:43.293 "dma_device_id": "system", 00:08:43.293 "dma_device_type": 1 00:08:43.293 }, 00:08:43.293 { 00:08:43.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.293 "dma_device_type": 2 00:08:43.293 } 00:08:43.293 ], 00:08:43.293 "driver_specific": {} 00:08:43.293 } 00:08:43.293 ] 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.293 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.294 "name": "Existed_Raid", 00:08:43.294 "uuid": "7dbc5bf8-5277-468d-aa70-48ea5a619653", 00:08:43.294 "strip_size_kb": 64, 00:08:43.294 "state": "online", 00:08:43.294 "raid_level": "raid0", 00:08:43.294 "superblock": false, 00:08:43.294 
"num_base_bdevs": 3, 00:08:43.294 "num_base_bdevs_discovered": 3, 00:08:43.294 "num_base_bdevs_operational": 3, 00:08:43.294 "base_bdevs_list": [ 00:08:43.294 { 00:08:43.294 "name": "NewBaseBdev", 00:08:43.294 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:43.294 "is_configured": true, 00:08:43.294 "data_offset": 0, 00:08:43.294 "data_size": 65536 00:08:43.294 }, 00:08:43.294 { 00:08:43.294 "name": "BaseBdev2", 00:08:43.294 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:43.294 "is_configured": true, 00:08:43.294 "data_offset": 0, 00:08:43.294 "data_size": 65536 00:08:43.294 }, 00:08:43.294 { 00:08:43.294 "name": "BaseBdev3", 00:08:43.294 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:43.294 "is_configured": true, 00:08:43.294 "data_offset": 0, 00:08:43.294 "data_size": 65536 00:08:43.294 } 00:08:43.294 ] 00:08:43.294 }' 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.294 12:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:43.862 12:39:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.862 [2024-11-06 12:39:32.291353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.862 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.862 "name": "Existed_Raid", 00:08:43.862 "aliases": [ 00:08:43.862 "7dbc5bf8-5277-468d-aa70-48ea5a619653" 00:08:43.862 ], 00:08:43.862 "product_name": "Raid Volume", 00:08:43.862 "block_size": 512, 00:08:43.862 "num_blocks": 196608, 00:08:43.862 "uuid": "7dbc5bf8-5277-468d-aa70-48ea5a619653", 00:08:43.862 "assigned_rate_limits": { 00:08:43.862 "rw_ios_per_sec": 0, 00:08:43.862 "rw_mbytes_per_sec": 0, 00:08:43.862 "r_mbytes_per_sec": 0, 00:08:43.862 "w_mbytes_per_sec": 0 00:08:43.862 }, 00:08:43.862 "claimed": false, 00:08:43.862 "zoned": false, 00:08:43.862 "supported_io_types": { 00:08:43.862 "read": true, 00:08:43.862 "write": true, 00:08:43.862 "unmap": true, 00:08:43.862 "flush": true, 00:08:43.862 "reset": true, 00:08:43.862 "nvme_admin": false, 00:08:43.862 "nvme_io": false, 00:08:43.862 "nvme_io_md": false, 00:08:43.862 "write_zeroes": true, 00:08:43.862 "zcopy": false, 00:08:43.862 "get_zone_info": false, 00:08:43.862 "zone_management": false, 00:08:43.862 "zone_append": false, 00:08:43.862 "compare": false, 00:08:43.862 "compare_and_write": false, 00:08:43.862 "abort": false, 00:08:43.862 "seek_hole": false, 00:08:43.862 "seek_data": false, 00:08:43.862 "copy": false, 00:08:43.862 "nvme_iov_md": false 00:08:43.862 }, 00:08:43.862 "memory_domains": [ 00:08:43.862 { 00:08:43.862 "dma_device_id": "system", 00:08:43.862 "dma_device_type": 1 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.862 
"dma_device_type": 2 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "dma_device_id": "system", 00:08:43.862 "dma_device_type": 1 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.862 "dma_device_type": 2 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "dma_device_id": "system", 00:08:43.862 "dma_device_type": 1 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.862 "dma_device_type": 2 00:08:43.862 } 00:08:43.862 ], 00:08:43.862 "driver_specific": { 00:08:43.862 "raid": { 00:08:43.862 "uuid": "7dbc5bf8-5277-468d-aa70-48ea5a619653", 00:08:43.862 "strip_size_kb": 64, 00:08:43.862 "state": "online", 00:08:43.862 "raid_level": "raid0", 00:08:43.862 "superblock": false, 00:08:43.862 "num_base_bdevs": 3, 00:08:43.862 "num_base_bdevs_discovered": 3, 00:08:43.862 "num_base_bdevs_operational": 3, 00:08:43.862 "base_bdevs_list": [ 00:08:43.862 { 00:08:43.862 "name": "NewBaseBdev", 00:08:43.862 "uuid": "52fed860-c9bf-4162-be34-20b259a18725", 00:08:43.862 "is_configured": true, 00:08:43.862 "data_offset": 0, 00:08:43.862 "data_size": 65536 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "name": "BaseBdev2", 00:08:43.862 "uuid": "d7be5a04-bb3d-4b03-9b6f-e7e32537ead6", 00:08:43.862 "is_configured": true, 00:08:43.862 "data_offset": 0, 00:08:43.862 "data_size": 65536 00:08:43.862 }, 00:08:43.862 { 00:08:43.862 "name": "BaseBdev3", 00:08:43.862 "uuid": "4aefc800-8b9b-4957-929b-d59d4ac5fa06", 00:08:43.862 "is_configured": true, 00:08:43.862 "data_offset": 0, 00:08:43.862 "data_size": 65536 00:08:43.862 } 00:08:43.862 ] 00:08:43.862 } 00:08:43.862 } 00:08:43.862 }' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:43.863 BaseBdev2 00:08:43.863 BaseBdev3' 
00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.863 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 [2024-11-06 12:39:32.587062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.122 [2024-11-06 12:39:32.587298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.122 [2024-11-06 12:39:32.587452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.122 [2024-11-06 12:39:32.587539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.122 [2024-11-06 
12:39:32.587561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63812 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63812 ']' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63812 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63812 00:08:44.122 killing process with pid 63812 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63812' 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63812 00:08:44.122 [2024-11-06 12:39:32.628895] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.122 12:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63812 00:08:44.382 [2024-11-06 12:39:32.901283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.315 12:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:45.315 00:08:45.315 real 0m12.049s 00:08:45.315 user 0m20.031s 00:08:45.315 sys 0m1.645s 00:08:45.315 12:39:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.315 ************************************ 00:08:45.315 END TEST raid_state_function_test 00:08:45.315 ************************************ 00:08:45.315 12:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.574 12:39:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:45.574 12:39:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:45.574 12:39:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.574 12:39:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.574 ************************************ 00:08:45.574 START TEST raid_state_function_test_sb 00:08:45.574 ************************************ 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.574 12:39:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:45.574 Process raid pid: 64453 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=64453 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64453' 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64453 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64453 ']' 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.574 12:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.574 12:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.574 [2024-11-06 12:39:34.097616] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:08:45.574 [2024-11-06 12:39:34.097803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.833 [2024-11-06 12:39:34.284513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.833 [2024-11-06 12:39:34.441087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.091 [2024-11-06 12:39:34.649560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.091 [2024-11-06 12:39:34.649635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.657 [2024-11-06 12:39:35.117565] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.657 [2024-11-06 12:39:35.117638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.657 [2024-11-06 12:39:35.117658] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.657 [2024-11-06 12:39:35.117675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.657 [2024-11-06 12:39:35.117686] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:46.657 [2024-11-06 12:39:35.117701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.657 "name": "Existed_Raid", 00:08:46.657 "uuid": "056d1453-ccf7-4374-88b0-8fae3dc246d8", 00:08:46.657 "strip_size_kb": 64, 00:08:46.657 "state": "configuring", 00:08:46.657 "raid_level": "raid0", 00:08:46.657 "superblock": true, 00:08:46.657 "num_base_bdevs": 3, 00:08:46.657 "num_base_bdevs_discovered": 0, 00:08:46.657 "num_base_bdevs_operational": 3, 00:08:46.657 "base_bdevs_list": [ 00:08:46.657 { 00:08:46.657 "name": "BaseBdev1", 00:08:46.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.657 "is_configured": false, 00:08:46.657 "data_offset": 0, 00:08:46.657 "data_size": 0 00:08:46.657 }, 00:08:46.657 { 00:08:46.657 "name": "BaseBdev2", 00:08:46.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.657 "is_configured": false, 00:08:46.657 "data_offset": 0, 00:08:46.657 "data_size": 0 00:08:46.657 }, 00:08:46.657 { 00:08:46.657 "name": "BaseBdev3", 00:08:46.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.657 "is_configured": false, 00:08:46.657 "data_offset": 0, 00:08:46.657 "data_size": 0 00:08:46.657 } 00:08:46.657 ] 00:08:46.657 }' 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.657 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 [2024-11-06 12:39:35.603151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.225 [2024-11-06 12:39:35.603218] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 [2024-11-06 12:39:35.615139] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.225 [2024-11-06 12:39:35.615244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.225 [2024-11-06 12:39:35.615275] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.225 [2024-11-06 12:39:35.615293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.225 [2024-11-06 12:39:35.615302] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.225 [2024-11-06 12:39:35.615315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 [2024-11-06 12:39:35.661639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.225 BaseBdev1 
00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 [ 00:08:47.225 { 00:08:47.225 "name": "BaseBdev1", 00:08:47.225 "aliases": [ 00:08:47.225 "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3" 00:08:47.225 ], 00:08:47.225 "product_name": "Malloc disk", 00:08:47.225 "block_size": 512, 00:08:47.225 "num_blocks": 65536, 00:08:47.225 "uuid": "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3", 00:08:47.225 "assigned_rate_limits": { 00:08:47.225 
"rw_ios_per_sec": 0, 00:08:47.225 "rw_mbytes_per_sec": 0, 00:08:47.225 "r_mbytes_per_sec": 0, 00:08:47.225 "w_mbytes_per_sec": 0 00:08:47.225 }, 00:08:47.225 "claimed": true, 00:08:47.225 "claim_type": "exclusive_write", 00:08:47.225 "zoned": false, 00:08:47.225 "supported_io_types": { 00:08:47.225 "read": true, 00:08:47.225 "write": true, 00:08:47.225 "unmap": true, 00:08:47.225 "flush": true, 00:08:47.225 "reset": true, 00:08:47.225 "nvme_admin": false, 00:08:47.225 "nvme_io": false, 00:08:47.225 "nvme_io_md": false, 00:08:47.225 "write_zeroes": true, 00:08:47.225 "zcopy": true, 00:08:47.225 "get_zone_info": false, 00:08:47.225 "zone_management": false, 00:08:47.225 "zone_append": false, 00:08:47.225 "compare": false, 00:08:47.225 "compare_and_write": false, 00:08:47.225 "abort": true, 00:08:47.225 "seek_hole": false, 00:08:47.225 "seek_data": false, 00:08:47.225 "copy": true, 00:08:47.225 "nvme_iov_md": false 00:08:47.225 }, 00:08:47.225 "memory_domains": [ 00:08:47.225 { 00:08:47.225 "dma_device_id": "system", 00:08:47.225 "dma_device_type": 1 00:08:47.225 }, 00:08:47.225 { 00:08:47.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.225 "dma_device_type": 2 00:08:47.225 } 00:08:47.225 ], 00:08:47.225 "driver_specific": {} 00:08:47.225 } 00:08:47.225 ] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.225 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.225 "name": "Existed_Raid", 00:08:47.225 "uuid": "4a3e4b37-8341-4598-8ab0-46c51936d896", 00:08:47.225 "strip_size_kb": 64, 00:08:47.225 "state": "configuring", 00:08:47.225 "raid_level": "raid0", 00:08:47.225 "superblock": true, 00:08:47.225 "num_base_bdevs": 3, 00:08:47.225 "num_base_bdevs_discovered": 1, 00:08:47.225 "num_base_bdevs_operational": 3, 00:08:47.225 "base_bdevs_list": [ 00:08:47.225 { 00:08:47.225 "name": "BaseBdev1", 00:08:47.225 "uuid": "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3", 00:08:47.225 "is_configured": true, 00:08:47.225 "data_offset": 2048, 00:08:47.225 "data_size": 63488 
00:08:47.225 }, 00:08:47.225 { 00:08:47.225 "name": "BaseBdev2", 00:08:47.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.226 "is_configured": false, 00:08:47.226 "data_offset": 0, 00:08:47.226 "data_size": 0 00:08:47.226 }, 00:08:47.226 { 00:08:47.226 "name": "BaseBdev3", 00:08:47.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.226 "is_configured": false, 00:08:47.226 "data_offset": 0, 00:08:47.226 "data_size": 0 00:08:47.226 } 00:08:47.226 ] 00:08:47.226 }' 00:08:47.226 12:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.226 12:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.793 [2024-11-06 12:39:36.169905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.793 [2024-11-06 12:39:36.169979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.793 [2024-11-06 12:39:36.181934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.793 [2024-11-06 
12:39:36.184708] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.793 [2024-11-06 12:39:36.184801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.793 [2024-11-06 12:39:36.184830] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.793 [2024-11-06 12:39:36.184860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.793 "name": "Existed_Raid", 00:08:47.793 "uuid": "c8ef4043-51d1-4751-91e4-f6241ab956fa", 00:08:47.793 "strip_size_kb": 64, 00:08:47.793 "state": "configuring", 00:08:47.793 "raid_level": "raid0", 00:08:47.793 "superblock": true, 00:08:47.793 "num_base_bdevs": 3, 00:08:47.793 "num_base_bdevs_discovered": 1, 00:08:47.793 "num_base_bdevs_operational": 3, 00:08:47.793 "base_bdevs_list": [ 00:08:47.793 { 00:08:47.793 "name": "BaseBdev1", 00:08:47.793 "uuid": "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3", 00:08:47.793 "is_configured": true, 00:08:47.793 "data_offset": 2048, 00:08:47.793 "data_size": 63488 00:08:47.793 }, 00:08:47.793 { 00:08:47.793 "name": "BaseBdev2", 00:08:47.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.793 "is_configured": false, 00:08:47.793 "data_offset": 0, 00:08:47.793 "data_size": 0 00:08:47.793 }, 00:08:47.793 { 00:08:47.793 "name": "BaseBdev3", 00:08:47.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.793 "is_configured": false, 00:08:47.793 "data_offset": 0, 00:08:47.793 "data_size": 0 00:08:47.793 } 00:08:47.793 ] 00:08:47.793 }' 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.793 12:39:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.052 [2024-11-06 12:39:36.685501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.052 BaseBdev2 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.052 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.311 [ 00:08:48.311 { 00:08:48.311 "name": "BaseBdev2", 00:08:48.311 "aliases": [ 00:08:48.311 "d2053f43-73a1-4312-b614-8a7fcef65426" 00:08:48.311 ], 00:08:48.311 "product_name": "Malloc disk", 00:08:48.311 "block_size": 512, 00:08:48.311 "num_blocks": 65536, 00:08:48.311 "uuid": "d2053f43-73a1-4312-b614-8a7fcef65426", 00:08:48.311 "assigned_rate_limits": { 00:08:48.311 "rw_ios_per_sec": 0, 00:08:48.311 "rw_mbytes_per_sec": 0, 00:08:48.311 "r_mbytes_per_sec": 0, 00:08:48.311 "w_mbytes_per_sec": 0 00:08:48.311 }, 00:08:48.311 "claimed": true, 00:08:48.311 "claim_type": "exclusive_write", 00:08:48.311 "zoned": false, 00:08:48.311 "supported_io_types": { 00:08:48.311 "read": true, 00:08:48.311 "write": true, 00:08:48.311 "unmap": true, 00:08:48.311 "flush": true, 00:08:48.311 "reset": true, 00:08:48.311 "nvme_admin": false, 00:08:48.311 "nvme_io": false, 00:08:48.311 "nvme_io_md": false, 00:08:48.311 "write_zeroes": true, 00:08:48.311 "zcopy": true, 00:08:48.311 "get_zone_info": false, 00:08:48.311 "zone_management": false, 00:08:48.311 "zone_append": false, 00:08:48.311 "compare": false, 00:08:48.311 "compare_and_write": false, 00:08:48.311 "abort": true, 00:08:48.311 "seek_hole": false, 00:08:48.311 "seek_data": false, 00:08:48.311 "copy": true, 00:08:48.311 "nvme_iov_md": false 00:08:48.311 }, 00:08:48.311 "memory_domains": [ 00:08:48.311 { 00:08:48.311 "dma_device_id": "system", 00:08:48.311 "dma_device_type": 1 00:08:48.311 }, 00:08:48.311 { 00:08:48.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.311 "dma_device_type": 2 00:08:48.311 } 00:08:48.311 ], 00:08:48.311 "driver_specific": {} 00:08:48.311 } 00:08:48.311 ] 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.311 "name": "Existed_Raid", 00:08:48.311 "uuid": "c8ef4043-51d1-4751-91e4-f6241ab956fa", 00:08:48.311 "strip_size_kb": 64, 00:08:48.311 "state": "configuring", 00:08:48.311 "raid_level": "raid0", 00:08:48.311 "superblock": true, 00:08:48.311 "num_base_bdevs": 3, 00:08:48.311 "num_base_bdevs_discovered": 2, 00:08:48.311 "num_base_bdevs_operational": 3, 00:08:48.311 "base_bdevs_list": [ 00:08:48.311 { 00:08:48.311 "name": "BaseBdev1", 00:08:48.311 "uuid": "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3", 00:08:48.311 "is_configured": true, 00:08:48.311 "data_offset": 2048, 00:08:48.311 "data_size": 63488 00:08:48.311 }, 00:08:48.311 { 00:08:48.311 "name": "BaseBdev2", 00:08:48.311 "uuid": "d2053f43-73a1-4312-b614-8a7fcef65426", 00:08:48.311 "is_configured": true, 00:08:48.311 "data_offset": 2048, 00:08:48.311 "data_size": 63488 00:08:48.311 }, 00:08:48.311 { 00:08:48.311 "name": "BaseBdev3", 00:08:48.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.311 "is_configured": false, 00:08:48.311 "data_offset": 0, 00:08:48.311 "data_size": 0 00:08:48.311 } 00:08:48.311 ] 00:08:48.311 }' 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.311 12:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.879 [2024-11-06 12:39:37.294658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.879 [2024-11-06 12:39:37.294987] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.879 [2024-11-06 12:39:37.295020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.879 [2024-11-06 12:39:37.295421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:48.879 [2024-11-06 12:39:37.295629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.879 [2024-11-06 12:39:37.295647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.879 [2024-11-06 12:39:37.295843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.879 BaseBdev3 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.879 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.879 [ 00:08:48.879 { 00:08:48.879 "name": "BaseBdev3", 00:08:48.879 "aliases": [ 00:08:48.879 "e088799e-d055-4b2c-92be-ed6bda070701" 00:08:48.879 ], 00:08:48.879 "product_name": "Malloc disk", 00:08:48.879 "block_size": 512, 00:08:48.879 "num_blocks": 65536, 00:08:48.879 "uuid": "e088799e-d055-4b2c-92be-ed6bda070701", 00:08:48.879 "assigned_rate_limits": { 00:08:48.879 "rw_ios_per_sec": 0, 00:08:48.879 "rw_mbytes_per_sec": 0, 00:08:48.879 "r_mbytes_per_sec": 0, 00:08:48.880 "w_mbytes_per_sec": 0 00:08:48.880 }, 00:08:48.880 "claimed": true, 00:08:48.880 "claim_type": "exclusive_write", 00:08:48.880 "zoned": false, 00:08:48.880 "supported_io_types": { 00:08:48.880 "read": true, 00:08:48.880 "write": true, 00:08:48.880 "unmap": true, 00:08:48.880 "flush": true, 00:08:48.880 "reset": true, 00:08:48.880 "nvme_admin": false, 00:08:48.880 "nvme_io": false, 00:08:48.880 "nvme_io_md": false, 00:08:48.880 "write_zeroes": true, 00:08:48.880 "zcopy": true, 00:08:48.880 "get_zone_info": false, 00:08:48.880 "zone_management": false, 00:08:48.880 "zone_append": false, 00:08:48.880 "compare": false, 00:08:48.880 "compare_and_write": false, 00:08:48.880 "abort": true, 00:08:48.880 "seek_hole": false, 00:08:48.880 "seek_data": false, 00:08:48.880 "copy": true, 00:08:48.880 "nvme_iov_md": false 00:08:48.880 }, 00:08:48.880 "memory_domains": [ 00:08:48.880 { 00:08:48.880 "dma_device_id": "system", 00:08:48.880 "dma_device_type": 1 00:08:48.880 }, 00:08:48.880 { 00:08:48.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.880 "dma_device_type": 2 00:08:48.880 } 00:08:48.880 ], 00:08:48.880 "driver_specific": 
{} 00:08:48.880 } 00:08:48.880 ] 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.880 
12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.880 "name": "Existed_Raid", 00:08:48.880 "uuid": "c8ef4043-51d1-4751-91e4-f6241ab956fa", 00:08:48.880 "strip_size_kb": 64, 00:08:48.880 "state": "online", 00:08:48.880 "raid_level": "raid0", 00:08:48.880 "superblock": true, 00:08:48.880 "num_base_bdevs": 3, 00:08:48.880 "num_base_bdevs_discovered": 3, 00:08:48.880 "num_base_bdevs_operational": 3, 00:08:48.880 "base_bdevs_list": [ 00:08:48.880 { 00:08:48.880 "name": "BaseBdev1", 00:08:48.880 "uuid": "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3", 00:08:48.880 "is_configured": true, 00:08:48.880 "data_offset": 2048, 00:08:48.880 "data_size": 63488 00:08:48.880 }, 00:08:48.880 { 00:08:48.880 "name": "BaseBdev2", 00:08:48.880 "uuid": "d2053f43-73a1-4312-b614-8a7fcef65426", 00:08:48.880 "is_configured": true, 00:08:48.880 "data_offset": 2048, 00:08:48.880 "data_size": 63488 00:08:48.880 }, 00:08:48.880 { 00:08:48.880 "name": "BaseBdev3", 00:08:48.880 "uuid": "e088799e-d055-4b2c-92be-ed6bda070701", 00:08:48.880 "is_configured": true, 00:08:48.880 "data_offset": 2048, 00:08:48.880 "data_size": 63488 00:08:48.880 } 00:08:48.880 ] 00:08:48.880 }' 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.880 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.448 [2024-11-06 12:39:37.835311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.448 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.448 "name": "Existed_Raid", 00:08:49.448 "aliases": [ 00:08:49.448 "c8ef4043-51d1-4751-91e4-f6241ab956fa" 00:08:49.448 ], 00:08:49.448 "product_name": "Raid Volume", 00:08:49.448 "block_size": 512, 00:08:49.448 "num_blocks": 190464, 00:08:49.448 "uuid": "c8ef4043-51d1-4751-91e4-f6241ab956fa", 00:08:49.448 "assigned_rate_limits": { 00:08:49.448 "rw_ios_per_sec": 0, 00:08:49.448 "rw_mbytes_per_sec": 0, 00:08:49.449 "r_mbytes_per_sec": 0, 00:08:49.449 "w_mbytes_per_sec": 0 00:08:49.449 }, 00:08:49.449 "claimed": false, 00:08:49.449 "zoned": false, 00:08:49.449 "supported_io_types": { 00:08:49.449 "read": true, 00:08:49.449 "write": true, 00:08:49.449 "unmap": true, 00:08:49.449 "flush": true, 00:08:49.449 "reset": true, 00:08:49.449 "nvme_admin": false, 00:08:49.449 "nvme_io": false, 00:08:49.449 "nvme_io_md": false, 00:08:49.449 
"write_zeroes": true, 00:08:49.449 "zcopy": false, 00:08:49.449 "get_zone_info": false, 00:08:49.449 "zone_management": false, 00:08:49.449 "zone_append": false, 00:08:49.449 "compare": false, 00:08:49.449 "compare_and_write": false, 00:08:49.449 "abort": false, 00:08:49.449 "seek_hole": false, 00:08:49.449 "seek_data": false, 00:08:49.449 "copy": false, 00:08:49.449 "nvme_iov_md": false 00:08:49.449 }, 00:08:49.449 "memory_domains": [ 00:08:49.449 { 00:08:49.449 "dma_device_id": "system", 00:08:49.449 "dma_device_type": 1 00:08:49.449 }, 00:08:49.449 { 00:08:49.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.449 "dma_device_type": 2 00:08:49.449 }, 00:08:49.449 { 00:08:49.449 "dma_device_id": "system", 00:08:49.449 "dma_device_type": 1 00:08:49.449 }, 00:08:49.449 { 00:08:49.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.449 "dma_device_type": 2 00:08:49.449 }, 00:08:49.449 { 00:08:49.449 "dma_device_id": "system", 00:08:49.449 "dma_device_type": 1 00:08:49.449 }, 00:08:49.449 { 00:08:49.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.449 "dma_device_type": 2 00:08:49.449 } 00:08:49.449 ], 00:08:49.449 "driver_specific": { 00:08:49.449 "raid": { 00:08:49.449 "uuid": "c8ef4043-51d1-4751-91e4-f6241ab956fa", 00:08:49.449 "strip_size_kb": 64, 00:08:49.449 "state": "online", 00:08:49.449 "raid_level": "raid0", 00:08:49.449 "superblock": true, 00:08:49.449 "num_base_bdevs": 3, 00:08:49.449 "num_base_bdevs_discovered": 3, 00:08:49.449 "num_base_bdevs_operational": 3, 00:08:49.449 "base_bdevs_list": [ 00:08:49.449 { 00:08:49.449 "name": "BaseBdev1", 00:08:49.449 "uuid": "17bb1a7d-2acb-4fb9-bdd1-e61d491ed4b3", 00:08:49.449 "is_configured": true, 00:08:49.449 "data_offset": 2048, 00:08:49.449 "data_size": 63488 00:08:49.449 }, 00:08:49.449 { 00:08:49.449 "name": "BaseBdev2", 00:08:49.449 "uuid": "d2053f43-73a1-4312-b614-8a7fcef65426", 00:08:49.449 "is_configured": true, 00:08:49.449 "data_offset": 2048, 00:08:49.449 "data_size": 63488 00:08:49.449 }, 
00:08:49.449 { 00:08:49.449 "name": "BaseBdev3", 00:08:49.449 "uuid": "e088799e-d055-4b2c-92be-ed6bda070701", 00:08:49.449 "is_configured": true, 00:08:49.449 "data_offset": 2048, 00:08:49.449 "data_size": 63488 00:08:49.449 } 00:08:49.449 ] 00:08:49.449 } 00:08:49.449 } 00:08:49.449 }' 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.449 BaseBdev2 00:08:49.449 BaseBdev3' 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.449 12:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.449 
12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.449 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.707 [2024-11-06 12:39:38.171084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.707 [2024-11-06 12:39:38.171125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.707 [2024-11-06 12:39:38.171222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.707 "name": "Existed_Raid", 00:08:49.707 "uuid": "c8ef4043-51d1-4751-91e4-f6241ab956fa", 00:08:49.707 "strip_size_kb": 64, 00:08:49.707 "state": "offline", 00:08:49.707 "raid_level": "raid0", 00:08:49.707 "superblock": true, 00:08:49.707 "num_base_bdevs": 3, 00:08:49.707 "num_base_bdevs_discovered": 2, 00:08:49.707 "num_base_bdevs_operational": 2, 00:08:49.707 "base_bdevs_list": [ 00:08:49.707 { 00:08:49.707 "name": null, 00:08:49.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.707 "is_configured": false, 00:08:49.707 "data_offset": 0, 00:08:49.707 "data_size": 63488 00:08:49.707 }, 00:08:49.707 { 00:08:49.707 "name": "BaseBdev2", 00:08:49.707 "uuid": "d2053f43-73a1-4312-b614-8a7fcef65426", 00:08:49.707 "is_configured": true, 00:08:49.707 "data_offset": 2048, 00:08:49.707 "data_size": 63488 00:08:49.707 }, 00:08:49.707 { 00:08:49.707 "name": "BaseBdev3", 00:08:49.707 "uuid": "e088799e-d055-4b2c-92be-ed6bda070701", 
00:08:49.707 "is_configured": true, 00:08:49.707 "data_offset": 2048, 00:08:49.707 "data_size": 63488 00:08:49.707 } 00:08:49.707 ] 00:08:49.707 }' 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.707 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.273 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.274 [2024-11-06 12:39:38.845901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.532 12:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.532 [2024-11-06 12:39:38.997661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.532 [2024-11-06 12:39:38.997730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.532 BaseBdev2 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:50.532 12:39:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.532 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.791 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.791 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.791 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.791 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.791 [ 00:08:50.791 { 00:08:50.791 "name": "BaseBdev2", 00:08:50.791 "aliases": [ 00:08:50.791 "dbc17a00-e508-4f9f-a411-f4857cc305cc" 00:08:50.791 ], 00:08:50.791 "product_name": "Malloc disk", 00:08:50.791 "block_size": 512, 00:08:50.791 "num_blocks": 65536, 00:08:50.791 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:50.791 "assigned_rate_limits": { 00:08:50.791 "rw_ios_per_sec": 0, 00:08:50.791 "rw_mbytes_per_sec": 0, 00:08:50.791 "r_mbytes_per_sec": 0, 00:08:50.791 "w_mbytes_per_sec": 0 00:08:50.791 }, 00:08:50.791 "claimed": false, 00:08:50.791 "zoned": false, 00:08:50.791 "supported_io_types": { 00:08:50.791 "read": true, 00:08:50.791 "write": true, 00:08:50.791 "unmap": true, 00:08:50.791 "flush": true, 00:08:50.791 "reset": true, 00:08:50.791 "nvme_admin": false, 00:08:50.791 "nvme_io": false, 00:08:50.791 "nvme_io_md": false, 00:08:50.791 "write_zeroes": true, 00:08:50.791 "zcopy": true, 00:08:50.791 "get_zone_info": false, 00:08:50.791 
"zone_management": false, 00:08:50.791 "zone_append": false, 00:08:50.791 "compare": false, 00:08:50.791 "compare_and_write": false, 00:08:50.791 "abort": true, 00:08:50.791 "seek_hole": false, 00:08:50.791 "seek_data": false, 00:08:50.792 "copy": true, 00:08:50.792 "nvme_iov_md": false 00:08:50.792 }, 00:08:50.792 "memory_domains": [ 00:08:50.792 { 00:08:50.792 "dma_device_id": "system", 00:08:50.792 "dma_device_type": 1 00:08:50.792 }, 00:08:50.792 { 00:08:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.792 "dma_device_type": 2 00:08:50.792 } 00:08:50.792 ], 00:08:50.792 "driver_specific": {} 00:08:50.792 } 00:08:50.792 ] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.792 BaseBdev3 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.792 [ 00:08:50.792 { 00:08:50.792 "name": "BaseBdev3", 00:08:50.792 "aliases": [ 00:08:50.792 "91f033fe-5640-4305-8153-81bb4b28d553" 00:08:50.792 ], 00:08:50.792 "product_name": "Malloc disk", 00:08:50.792 "block_size": 512, 00:08:50.792 "num_blocks": 65536, 00:08:50.792 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:50.792 "assigned_rate_limits": { 00:08:50.792 "rw_ios_per_sec": 0, 00:08:50.792 "rw_mbytes_per_sec": 0, 00:08:50.792 "r_mbytes_per_sec": 0, 00:08:50.792 "w_mbytes_per_sec": 0 00:08:50.792 }, 00:08:50.792 "claimed": false, 00:08:50.792 "zoned": false, 00:08:50.792 "supported_io_types": { 00:08:50.792 "read": true, 00:08:50.792 "write": true, 00:08:50.792 "unmap": true, 00:08:50.792 "flush": true, 00:08:50.792 "reset": true, 00:08:50.792 "nvme_admin": false, 00:08:50.792 "nvme_io": false, 00:08:50.792 "nvme_io_md": false, 00:08:50.792 "write_zeroes": true, 00:08:50.792 
"zcopy": true, 00:08:50.792 "get_zone_info": false, 00:08:50.792 "zone_management": false, 00:08:50.792 "zone_append": false, 00:08:50.792 "compare": false, 00:08:50.792 "compare_and_write": false, 00:08:50.792 "abort": true, 00:08:50.792 "seek_hole": false, 00:08:50.792 "seek_data": false, 00:08:50.792 "copy": true, 00:08:50.792 "nvme_iov_md": false 00:08:50.792 }, 00:08:50.792 "memory_domains": [ 00:08:50.792 { 00:08:50.792 "dma_device_id": "system", 00:08:50.792 "dma_device_type": 1 00:08:50.792 }, 00:08:50.792 { 00:08:50.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.792 "dma_device_type": 2 00:08:50.792 } 00:08:50.792 ], 00:08:50.792 "driver_specific": {} 00:08:50.792 } 00:08:50.792 ] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.792 [2024-11-06 12:39:39.300677] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.792 [2024-11-06 12:39:39.300875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.792 [2024-11-06 12:39:39.301021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.792 [2024-11-06 12:39:39.303582] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.792 12:39:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.792 "name": "Existed_Raid", 00:08:50.792 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:50.792 "strip_size_kb": 64, 00:08:50.792 "state": "configuring", 00:08:50.792 "raid_level": "raid0", 00:08:50.792 "superblock": true, 00:08:50.792 "num_base_bdevs": 3, 00:08:50.792 "num_base_bdevs_discovered": 2, 00:08:50.792 "num_base_bdevs_operational": 3, 00:08:50.792 "base_bdevs_list": [ 00:08:50.792 { 00:08:50.792 "name": "BaseBdev1", 00:08:50.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.792 "is_configured": false, 00:08:50.792 "data_offset": 0, 00:08:50.792 "data_size": 0 00:08:50.792 }, 00:08:50.792 { 00:08:50.792 "name": "BaseBdev2", 00:08:50.792 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:50.792 "is_configured": true, 00:08:50.792 "data_offset": 2048, 00:08:50.792 "data_size": 63488 00:08:50.792 }, 00:08:50.792 { 00:08:50.792 "name": "BaseBdev3", 00:08:50.792 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:50.792 "is_configured": true, 00:08:50.792 "data_offset": 2048, 00:08:50.792 "data_size": 63488 00:08:50.792 } 00:08:50.792 ] 00:08:50.792 }' 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.792 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.381 [2024-11-06 12:39:39.812799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.381 12:39:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.381 "name": "Existed_Raid", 00:08:51.381 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:51.381 "strip_size_kb": 64, 
00:08:51.381 "state": "configuring", 00:08:51.381 "raid_level": "raid0", 00:08:51.381 "superblock": true, 00:08:51.381 "num_base_bdevs": 3, 00:08:51.381 "num_base_bdevs_discovered": 1, 00:08:51.381 "num_base_bdevs_operational": 3, 00:08:51.381 "base_bdevs_list": [ 00:08:51.381 { 00:08:51.381 "name": "BaseBdev1", 00:08:51.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.381 "is_configured": false, 00:08:51.381 "data_offset": 0, 00:08:51.381 "data_size": 0 00:08:51.381 }, 00:08:51.381 { 00:08:51.381 "name": null, 00:08:51.381 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:51.381 "is_configured": false, 00:08:51.381 "data_offset": 0, 00:08:51.381 "data_size": 63488 00:08:51.381 }, 00:08:51.381 { 00:08:51.381 "name": "BaseBdev3", 00:08:51.381 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:51.381 "is_configured": true, 00:08:51.381 "data_offset": 2048, 00:08:51.381 "data_size": 63488 00:08:51.381 } 00:08:51.381 ] 00:08:51.381 }' 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.381 12:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.947 [2024-11-06 12:39:40.486847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.947 BaseBdev1 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.947 
[ 00:08:51.947 { 00:08:51.947 "name": "BaseBdev1", 00:08:51.947 "aliases": [ 00:08:51.947 "7fd0ca75-b8b3-4622-9860-1823a3790959" 00:08:51.947 ], 00:08:51.947 "product_name": "Malloc disk", 00:08:51.947 "block_size": 512, 00:08:51.947 "num_blocks": 65536, 00:08:51.947 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:51.947 "assigned_rate_limits": { 00:08:51.947 "rw_ios_per_sec": 0, 00:08:51.947 "rw_mbytes_per_sec": 0, 00:08:51.947 "r_mbytes_per_sec": 0, 00:08:51.947 "w_mbytes_per_sec": 0 00:08:51.947 }, 00:08:51.947 "claimed": true, 00:08:51.947 "claim_type": "exclusive_write", 00:08:51.947 "zoned": false, 00:08:51.947 "supported_io_types": { 00:08:51.947 "read": true, 00:08:51.947 "write": true, 00:08:51.947 "unmap": true, 00:08:51.947 "flush": true, 00:08:51.947 "reset": true, 00:08:51.947 "nvme_admin": false, 00:08:51.947 "nvme_io": false, 00:08:51.947 "nvme_io_md": false, 00:08:51.947 "write_zeroes": true, 00:08:51.947 "zcopy": true, 00:08:51.947 "get_zone_info": false, 00:08:51.947 "zone_management": false, 00:08:51.947 "zone_append": false, 00:08:51.947 "compare": false, 00:08:51.947 "compare_and_write": false, 00:08:51.947 "abort": true, 00:08:51.947 "seek_hole": false, 00:08:51.947 "seek_data": false, 00:08:51.947 "copy": true, 00:08:51.947 "nvme_iov_md": false 00:08:51.947 }, 00:08:51.947 "memory_domains": [ 00:08:51.947 { 00:08:51.947 "dma_device_id": "system", 00:08:51.947 "dma_device_type": 1 00:08:51.947 }, 00:08:51.947 { 00:08:51.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.947 "dma_device_type": 2 00:08:51.947 } 00:08:51.947 ], 00:08:51.947 "driver_specific": {} 00:08:51.947 } 00:08:51.947 ] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.947 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.947 "name": "Existed_Raid", 00:08:51.947 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:51.947 "strip_size_kb": 64, 00:08:51.947 "state": "configuring", 00:08:51.947 "raid_level": "raid0", 00:08:51.947 "superblock": true, 
00:08:51.947 "num_base_bdevs": 3, 00:08:51.947 "num_base_bdevs_discovered": 2, 00:08:51.947 "num_base_bdevs_operational": 3, 00:08:51.947 "base_bdevs_list": [ 00:08:51.947 { 00:08:51.947 "name": "BaseBdev1", 00:08:51.947 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:51.947 "is_configured": true, 00:08:51.947 "data_offset": 2048, 00:08:51.947 "data_size": 63488 00:08:51.947 }, 00:08:51.947 { 00:08:51.947 "name": null, 00:08:51.947 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:51.947 "is_configured": false, 00:08:51.947 "data_offset": 0, 00:08:51.947 "data_size": 63488 00:08:51.947 }, 00:08:51.947 { 00:08:51.947 "name": "BaseBdev3", 00:08:51.947 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:51.947 "is_configured": true, 00:08:51.947 "data_offset": 2048, 00:08:51.947 "data_size": 63488 00:08:51.947 } 00:08:51.948 ] 00:08:51.948 }' 00:08:51.948 12:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.948 12:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.514 [2024-11-06 12:39:41.083081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.514 "name": "Existed_Raid", 00:08:52.514 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:52.514 "strip_size_kb": 64, 00:08:52.514 "state": "configuring", 00:08:52.514 "raid_level": "raid0", 00:08:52.514 "superblock": true, 00:08:52.514 "num_base_bdevs": 3, 00:08:52.514 "num_base_bdevs_discovered": 1, 00:08:52.514 "num_base_bdevs_operational": 3, 00:08:52.514 "base_bdevs_list": [ 00:08:52.514 { 00:08:52.514 "name": "BaseBdev1", 00:08:52.514 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:52.514 "is_configured": true, 00:08:52.514 "data_offset": 2048, 00:08:52.514 "data_size": 63488 00:08:52.514 }, 00:08:52.514 { 00:08:52.514 "name": null, 00:08:52.514 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:52.514 "is_configured": false, 00:08:52.514 "data_offset": 0, 00:08:52.514 "data_size": 63488 00:08:52.514 }, 00:08:52.514 { 00:08:52.514 "name": null, 00:08:52.514 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:52.514 "is_configured": false, 00:08:52.514 "data_offset": 0, 00:08:52.514 "data_size": 63488 00:08:52.514 } 00:08:52.514 ] 00:08:52.514 }' 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.514 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.080 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.080 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.080 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.080 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 [2024-11-06 12:39:41.683280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.081 12:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.339 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.339 "name": "Existed_Raid", 00:08:53.339 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:53.339 "strip_size_kb": 64, 00:08:53.339 "state": "configuring", 00:08:53.339 "raid_level": "raid0", 00:08:53.339 "superblock": true, 00:08:53.339 "num_base_bdevs": 3, 00:08:53.339 "num_base_bdevs_discovered": 2, 00:08:53.340 "num_base_bdevs_operational": 3, 00:08:53.340 "base_bdevs_list": [ 00:08:53.340 { 00:08:53.340 "name": "BaseBdev1", 00:08:53.340 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:53.340 "is_configured": true, 00:08:53.340 "data_offset": 2048, 00:08:53.340 "data_size": 63488 00:08:53.340 }, 00:08:53.340 { 00:08:53.340 "name": null, 00:08:53.340 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:53.340 "is_configured": false, 00:08:53.340 "data_offset": 0, 00:08:53.340 "data_size": 63488 00:08:53.340 }, 00:08:53.340 { 00:08:53.340 "name": "BaseBdev3", 00:08:53.340 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:53.340 "is_configured": true, 00:08:53.340 "data_offset": 2048, 00:08:53.340 "data_size": 63488 00:08:53.340 } 00:08:53.340 ] 00:08:53.340 }' 00:08:53.340 12:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.340 12:39:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.598 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.598 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.598 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.598 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.598 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.857 [2024-11-06 12:39:42.275545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.857 "name": "Existed_Raid", 00:08:53.857 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:53.857 "strip_size_kb": 64, 00:08:53.857 "state": "configuring", 00:08:53.857 "raid_level": "raid0", 00:08:53.857 "superblock": true, 00:08:53.857 "num_base_bdevs": 3, 00:08:53.857 "num_base_bdevs_discovered": 1, 00:08:53.857 "num_base_bdevs_operational": 3, 00:08:53.857 "base_bdevs_list": [ 00:08:53.857 { 00:08:53.857 "name": null, 00:08:53.857 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:53.857 "is_configured": false, 00:08:53.857 "data_offset": 0, 00:08:53.857 "data_size": 63488 00:08:53.857 }, 00:08:53.857 { 00:08:53.857 "name": null, 00:08:53.857 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:53.857 "is_configured": false, 00:08:53.857 "data_offset": 0, 00:08:53.857 
"data_size": 63488 00:08:53.857 }, 00:08:53.857 { 00:08:53.857 "name": "BaseBdev3", 00:08:53.857 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:53.857 "is_configured": true, 00:08:53.857 "data_offset": 2048, 00:08:53.857 "data_size": 63488 00:08:53.857 } 00:08:53.857 ] 00:08:53.857 }' 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.857 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.424 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.425 [2024-11-06 12:39:42.977120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.425 12:39:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.425 12:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.425 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.425 "name": "Existed_Raid", 00:08:54.425 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:54.425 "strip_size_kb": 64, 00:08:54.425 "state": "configuring", 00:08:54.425 "raid_level": "raid0", 00:08:54.425 "superblock": true, 00:08:54.425 "num_base_bdevs": 3, 00:08:54.425 
"num_base_bdevs_discovered": 2, 00:08:54.425 "num_base_bdevs_operational": 3, 00:08:54.425 "base_bdevs_list": [ 00:08:54.425 { 00:08:54.425 "name": null, 00:08:54.425 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:54.425 "is_configured": false, 00:08:54.425 "data_offset": 0, 00:08:54.425 "data_size": 63488 00:08:54.425 }, 00:08:54.425 { 00:08:54.425 "name": "BaseBdev2", 00:08:54.425 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:54.425 "is_configured": true, 00:08:54.425 "data_offset": 2048, 00:08:54.425 "data_size": 63488 00:08:54.425 }, 00:08:54.425 { 00:08:54.425 "name": "BaseBdev3", 00:08:54.425 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:54.425 "is_configured": true, 00:08:54.425 "data_offset": 2048, 00:08:54.425 "data_size": 63488 00:08:54.425 } 00:08:54.425 ] 00:08:54.425 }' 00:08:54.425 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.425 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.993 12:39:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7fd0ca75-b8b3-4622-9860-1823a3790959 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.993 [2024-11-06 12:39:43.634478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:54.993 [2024-11-06 12:39:43.634777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.993 [2024-11-06 12:39:43.634803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.993 [2024-11-06 12:39:43.635124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:54.993 NewBaseBdev 00:08:54.993 [2024-11-06 12:39:43.635342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.993 [2024-11-06 12:39:43.635375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:54.993 [2024-11-06 12:39:43.635545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.993 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:54.994 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.994 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.316 [ 00:08:55.316 { 00:08:55.316 "name": "NewBaseBdev", 00:08:55.316 "aliases": [ 00:08:55.316 "7fd0ca75-b8b3-4622-9860-1823a3790959" 00:08:55.316 ], 00:08:55.316 "product_name": "Malloc disk", 00:08:55.316 "block_size": 512, 00:08:55.316 "num_blocks": 65536, 00:08:55.316 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:55.316 "assigned_rate_limits": { 00:08:55.316 "rw_ios_per_sec": 0, 00:08:55.316 "rw_mbytes_per_sec": 0, 00:08:55.316 "r_mbytes_per_sec": 0, 00:08:55.316 "w_mbytes_per_sec": 0 00:08:55.316 }, 00:08:55.316 "claimed": true, 00:08:55.316 "claim_type": "exclusive_write", 00:08:55.316 "zoned": false, 00:08:55.316 "supported_io_types": { 00:08:55.316 "read": true, 00:08:55.316 "write": true, 
00:08:55.316 "unmap": true, 00:08:55.316 "flush": true, 00:08:55.316 "reset": true, 00:08:55.316 "nvme_admin": false, 00:08:55.316 "nvme_io": false, 00:08:55.316 "nvme_io_md": false, 00:08:55.316 "write_zeroes": true, 00:08:55.316 "zcopy": true, 00:08:55.316 "get_zone_info": false, 00:08:55.316 "zone_management": false, 00:08:55.316 "zone_append": false, 00:08:55.316 "compare": false, 00:08:55.316 "compare_and_write": false, 00:08:55.316 "abort": true, 00:08:55.316 "seek_hole": false, 00:08:55.316 "seek_data": false, 00:08:55.316 "copy": true, 00:08:55.316 "nvme_iov_md": false 00:08:55.316 }, 00:08:55.316 "memory_domains": [ 00:08:55.316 { 00:08:55.316 "dma_device_id": "system", 00:08:55.316 "dma_device_type": 1 00:08:55.316 }, 00:08:55.316 { 00:08:55.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.316 "dma_device_type": 2 00:08:55.316 } 00:08:55.316 ], 00:08:55.316 "driver_specific": {} 00:08:55.316 } 00:08:55.316 ] 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.316 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.316 "name": "Existed_Raid", 00:08:55.316 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:55.316 "strip_size_kb": 64, 00:08:55.316 "state": "online", 00:08:55.316 "raid_level": "raid0", 00:08:55.316 "superblock": true, 00:08:55.317 "num_base_bdevs": 3, 00:08:55.317 "num_base_bdevs_discovered": 3, 00:08:55.317 "num_base_bdevs_operational": 3, 00:08:55.317 "base_bdevs_list": [ 00:08:55.317 { 00:08:55.317 "name": "NewBaseBdev", 00:08:55.317 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:55.317 "is_configured": true, 00:08:55.317 "data_offset": 2048, 00:08:55.317 "data_size": 63488 00:08:55.317 }, 00:08:55.317 { 00:08:55.317 "name": "BaseBdev2", 00:08:55.317 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:55.317 "is_configured": true, 00:08:55.317 "data_offset": 2048, 00:08:55.317 "data_size": 63488 00:08:55.317 }, 00:08:55.317 { 00:08:55.317 "name": "BaseBdev3", 00:08:55.317 "uuid": 
"91f033fe-5640-4305-8153-81bb4b28d553", 00:08:55.317 "is_configured": true, 00:08:55.317 "data_offset": 2048, 00:08:55.317 "data_size": 63488 00:08:55.317 } 00:08:55.317 ] 00:08:55.317 }' 00:08:55.317 12:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.317 12:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.583 [2024-11-06 12:39:44.163046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.583 "name": "Existed_Raid", 00:08:55.583 "aliases": [ 00:08:55.583 "1d7d8db3-5461-473d-9021-cc87370e112b" 
00:08:55.583 ], 00:08:55.583 "product_name": "Raid Volume", 00:08:55.583 "block_size": 512, 00:08:55.583 "num_blocks": 190464, 00:08:55.583 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:55.583 "assigned_rate_limits": { 00:08:55.583 "rw_ios_per_sec": 0, 00:08:55.583 "rw_mbytes_per_sec": 0, 00:08:55.583 "r_mbytes_per_sec": 0, 00:08:55.583 "w_mbytes_per_sec": 0 00:08:55.583 }, 00:08:55.583 "claimed": false, 00:08:55.583 "zoned": false, 00:08:55.583 "supported_io_types": { 00:08:55.583 "read": true, 00:08:55.583 "write": true, 00:08:55.583 "unmap": true, 00:08:55.583 "flush": true, 00:08:55.583 "reset": true, 00:08:55.583 "nvme_admin": false, 00:08:55.583 "nvme_io": false, 00:08:55.583 "nvme_io_md": false, 00:08:55.583 "write_zeroes": true, 00:08:55.583 "zcopy": false, 00:08:55.583 "get_zone_info": false, 00:08:55.583 "zone_management": false, 00:08:55.583 "zone_append": false, 00:08:55.583 "compare": false, 00:08:55.583 "compare_and_write": false, 00:08:55.583 "abort": false, 00:08:55.583 "seek_hole": false, 00:08:55.583 "seek_data": false, 00:08:55.583 "copy": false, 00:08:55.583 "nvme_iov_md": false 00:08:55.583 }, 00:08:55.583 "memory_domains": [ 00:08:55.583 { 00:08:55.583 "dma_device_id": "system", 00:08:55.583 "dma_device_type": 1 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.583 "dma_device_type": 2 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "dma_device_id": "system", 00:08:55.583 "dma_device_type": 1 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.583 "dma_device_type": 2 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "dma_device_id": "system", 00:08:55.583 "dma_device_type": 1 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.583 "dma_device_type": 2 00:08:55.583 } 00:08:55.583 ], 00:08:55.583 "driver_specific": { 00:08:55.583 "raid": { 00:08:55.583 "uuid": "1d7d8db3-5461-473d-9021-cc87370e112b", 00:08:55.583 
"strip_size_kb": 64, 00:08:55.583 "state": "online", 00:08:55.583 "raid_level": "raid0", 00:08:55.583 "superblock": true, 00:08:55.583 "num_base_bdevs": 3, 00:08:55.583 "num_base_bdevs_discovered": 3, 00:08:55.583 "num_base_bdevs_operational": 3, 00:08:55.583 "base_bdevs_list": [ 00:08:55.583 { 00:08:55.583 "name": "NewBaseBdev", 00:08:55.583 "uuid": "7fd0ca75-b8b3-4622-9860-1823a3790959", 00:08:55.583 "is_configured": true, 00:08:55.583 "data_offset": 2048, 00:08:55.583 "data_size": 63488 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "name": "BaseBdev2", 00:08:55.583 "uuid": "dbc17a00-e508-4f9f-a411-f4857cc305cc", 00:08:55.583 "is_configured": true, 00:08:55.583 "data_offset": 2048, 00:08:55.583 "data_size": 63488 00:08:55.583 }, 00:08:55.583 { 00:08:55.583 "name": "BaseBdev3", 00:08:55.583 "uuid": "91f033fe-5640-4305-8153-81bb4b28d553", 00:08:55.583 "is_configured": true, 00:08:55.583 "data_offset": 2048, 00:08:55.583 "data_size": 63488 00:08:55.583 } 00:08:55.583 ] 00:08:55.583 } 00:08:55.583 } 00:08:55.583 }' 00:08:55.583 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:55.842 BaseBdev2 00:08:55.842 BaseBdev3' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.842 12:39:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.842 [2024-11-06 12:39:44.486750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.842 [2024-11-06 12:39:44.486792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.842 [2024-11-06 12:39:44.486892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.842 [2024-11-06 12:39:44.486966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.842 [2024-11-06 12:39:44.486986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64453 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64453 ']' 00:08:55.842 12:39:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64453 00:08:55.842 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64453 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.100 killing process with pid 64453 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64453' 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64453 00:08:56.100 [2024-11-06 12:39:44.525578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.100 12:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64453 00:08:56.358 [2024-11-06 12:39:44.792398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.293 12:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:57.293 00:08:57.293 real 0m11.805s 00:08:57.293 user 0m19.605s 00:08:57.293 sys 0m1.635s 00:08:57.293 12:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.293 ************************************ 00:08:57.293 END TEST raid_state_function_test_sb 00:08:57.293 12:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.293 ************************************ 00:08:57.293 12:39:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:57.293 12:39:45 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:57.293 12:39:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:57.293 12:39:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.293 ************************************ 00:08:57.293 START TEST raid_superblock_test 00:08:57.293 ************************************ 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:57.293 12:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65090 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65090 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65090 ']' 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.293 12:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:57.552 [2024-11-06 12:39:45.965662] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
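The `waitforlisten 65090` step above blocks until the freshly launched `bdev_svc` app accepts connections on the UNIX domain socket `/var/tmp/spdk.sock`. A minimal sketch of that wait loop in Python (a hypothetical helper for illustration, not the autotest implementation):

```python
import socket
import time

def wait_for_unix_listen(path: str, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll until some process accepts connections on the UNIX socket at `path`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True           # something is listening
            except OSError:
                time.sleep(interval)  # not up yet (ENOENT/ECONNREFUSED); retry
    return False
```

The real harness additionally checks that the target pid is still alive between retries, so a crashed daemon fails fast instead of burning the whole timeout.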
00:08:57.552 [2024-11-06 12:39:45.965872] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65090 ] 00:08:57.552 [2024-11-06 12:39:46.145645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.811 [2024-11-06 12:39:46.277037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.070 [2024-11-06 12:39:46.483611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.070 [2024-11-06 12:39:46.483690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:58.637 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:58.638 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.638 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.638 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.638 12:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:58.638 
12:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 malloc1 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 [2024-11-06 12:39:47.050986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.638 [2024-11-06 12:39:47.051090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.638 [2024-11-06 12:39:47.051127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:58.638 [2024-11-06 12:39:47.051144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.638 [2024-11-06 12:39:47.054168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.638 [2024-11-06 12:39:47.054226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.638 pt1 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 malloc2 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 [2024-11-06 12:39:47.107903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.638 [2024-11-06 12:39:47.107998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.638 [2024-11-06 12:39:47.108063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:58.638 [2024-11-06 12:39:47.108086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.638 [2024-11-06 12:39:47.111018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.638 [2024-11-06 12:39:47.111076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.638 
pt2 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 malloc3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 [2024-11-06 12:39:47.190603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:58.638 [2024-11-06 12:39:47.190697] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.638 [2024-11-06 12:39:47.190753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:58.638 [2024-11-06 12:39:47.190791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.638 [2024-11-06 12:39:47.194379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.638 [2024-11-06 12:39:47.194435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:58.638 pt3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 [2024-11-06 12:39:47.202782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.638 [2024-11-06 12:39:47.206012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.638 [2024-11-06 12:39:47.206145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:58.638 [2024-11-06 12:39:47.206476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:58.638 [2024-11-06 12:39:47.206515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:58.638 [2024-11-06 12:39:47.206947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
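The creation trace above reports `blockcnt 190464, blocklen 512` for a raid0 volume built with `-z 64` (64 KiB strips) over three base bdevs that each expose `data_size 63488` at `data_offset 2048` (the superblock region carved out of a 32 MiB, 512-byte-block malloc bdev). A quick sketch checking that arithmetic; the constants come from the log, not from any SPDK API:

```python
# Sanity-check the raid0 geometry reported in the creation log.
MALLOC_MIB = 32          # bdev_malloc_create 32 512
BLOCK_SIZE = 512
SB_OFFSET_BLOCKS = 2048  # "data_offset": 2048 (superblock region)
STRIP_SIZE_KB = 64       # -z 64
NUM_BASE_BDEVS = 3

base_blocks = MALLOC_MIB * 1024 * 1024 // BLOCK_SIZE  # 65536 blocks per malloc bdev
data_blocks = base_blocks - SB_OFFSET_BLOCKS          # "data_size": 63488
raid_blocks = data_blocks * NUM_BASE_BDEVS            # raid0 capacity is the sum
strip_blocks = STRIP_SIZE_KB * 1024 // BLOCK_SIZE     # 128 blocks per strip

print(data_blocks, raid_blocks, strip_blocks)  # 63488 190464 128
```

Note that 63488 divides evenly into 128-block strips (496 strips per base bdev), so no capacity is lost to strip rounding here.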
00:08:58.638 [2024-11-06 12:39:47.207288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:58.638 [2024-11-06 12:39:47.207318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:58.638 [2024-11-06 12:39:47.207662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.638 12:39:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.638 "name": "raid_bdev1", 00:08:58.638 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:08:58.638 "strip_size_kb": 64, 00:08:58.638 "state": "online", 00:08:58.638 "raid_level": "raid0", 00:08:58.638 "superblock": true, 00:08:58.638 "num_base_bdevs": 3, 00:08:58.638 "num_base_bdevs_discovered": 3, 00:08:58.638 "num_base_bdevs_operational": 3, 00:08:58.638 "base_bdevs_list": [ 00:08:58.638 { 00:08:58.638 "name": "pt1", 00:08:58.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.638 "is_configured": true, 00:08:58.638 "data_offset": 2048, 00:08:58.638 "data_size": 63488 00:08:58.638 }, 00:08:58.638 { 00:08:58.638 "name": "pt2", 00:08:58.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.638 "is_configured": true, 00:08:58.638 "data_offset": 2048, 00:08:58.638 "data_size": 63488 00:08:58.638 }, 00:08:58.638 { 00:08:58.638 "name": "pt3", 00:08:58.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.638 "is_configured": true, 00:08:58.638 "data_offset": 2048, 00:08:58.638 "data_size": 63488 00:08:58.638 } 00:08:58.638 ] 00:08:58.638 }' 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.638 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.259 [2024-11-06 12:39:47.756119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.259 "name": "raid_bdev1", 00:08:59.259 "aliases": [ 00:08:59.259 "fe481b0b-7544-46bd-9533-b7d573aff4cf" 00:08:59.259 ], 00:08:59.259 "product_name": "Raid Volume", 00:08:59.259 "block_size": 512, 00:08:59.259 "num_blocks": 190464, 00:08:59.259 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:08:59.259 "assigned_rate_limits": { 00:08:59.259 "rw_ios_per_sec": 0, 00:08:59.259 "rw_mbytes_per_sec": 0, 00:08:59.259 "r_mbytes_per_sec": 0, 00:08:59.259 "w_mbytes_per_sec": 0 00:08:59.259 }, 00:08:59.259 "claimed": false, 00:08:59.259 "zoned": false, 00:08:59.259 "supported_io_types": { 00:08:59.259 "read": true, 00:08:59.259 "write": true, 00:08:59.259 "unmap": true, 00:08:59.259 "flush": true, 00:08:59.259 "reset": true, 00:08:59.259 "nvme_admin": false, 00:08:59.259 "nvme_io": false, 00:08:59.259 "nvme_io_md": false, 00:08:59.259 "write_zeroes": true, 00:08:59.259 "zcopy": false, 00:08:59.259 "get_zone_info": false, 00:08:59.259 "zone_management": false, 00:08:59.259 "zone_append": false, 00:08:59.259 "compare": 
false, 00:08:59.259 "compare_and_write": false, 00:08:59.259 "abort": false, 00:08:59.259 "seek_hole": false, 00:08:59.259 "seek_data": false, 00:08:59.259 "copy": false, 00:08:59.259 "nvme_iov_md": false 00:08:59.259 }, 00:08:59.259 "memory_domains": [ 00:08:59.259 { 00:08:59.259 "dma_device_id": "system", 00:08:59.259 "dma_device_type": 1 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.259 "dma_device_type": 2 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "dma_device_id": "system", 00:08:59.259 "dma_device_type": 1 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.259 "dma_device_type": 2 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "dma_device_id": "system", 00:08:59.259 "dma_device_type": 1 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.259 "dma_device_type": 2 00:08:59.259 } 00:08:59.259 ], 00:08:59.259 "driver_specific": { 00:08:59.259 "raid": { 00:08:59.259 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:08:59.259 "strip_size_kb": 64, 00:08:59.259 "state": "online", 00:08:59.259 "raid_level": "raid0", 00:08:59.259 "superblock": true, 00:08:59.259 "num_base_bdevs": 3, 00:08:59.259 "num_base_bdevs_discovered": 3, 00:08:59.259 "num_base_bdevs_operational": 3, 00:08:59.259 "base_bdevs_list": [ 00:08:59.259 { 00:08:59.259 "name": "pt1", 00:08:59.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.259 "is_configured": true, 00:08:59.259 "data_offset": 2048, 00:08:59.259 "data_size": 63488 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "name": "pt2", 00:08:59.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.259 "is_configured": true, 00:08:59.259 "data_offset": 2048, 00:08:59.259 "data_size": 63488 00:08:59.259 }, 00:08:59.259 { 00:08:59.259 "name": "pt3", 00:08:59.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.259 "is_configured": true, 00:08:59.259 "data_offset": 2048, 00:08:59.259 "data_size": 
63488 00:08:59.259 } 00:08:59.259 ] 00:08:59.259 } 00:08:59.259 } 00:08:59.259 }' 00:08:59.259 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.260 pt2 00:08:59.260 pt3' 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.260 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:59.519 12:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 [2024-11-06 12:39:48.056150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fe481b0b-7544-46bd-9533-b7d573aff4cf 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fe481b0b-7544-46bd-9533-b7d573aff4cf ']' 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 [2024-11-06 12:39:48.107800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.519 [2024-11-06 12:39:48.107843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.519 [2024-11-06 12:39:48.107947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.519 [2024-11-06 12:39:48.108028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.519 [2024-11-06 12:39:48.108045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:59.778 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:59.779 12:39:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 [2024-11-06 12:39:48.251937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:59.779 [2024-11-06 12:39:48.254492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:59.779 [2024-11-06 12:39:48.254582] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:59.779 [2024-11-06 12:39:48.254654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:59.779 [2024-11-06 12:39:48.254741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:59.779 [2024-11-06 12:39:48.254797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:59.779 [2024-11-06 12:39:48.254836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.779 [2024-11-06 12:39:48.254876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:59.779 request: 00:08:59.779 { 00:08:59.779 "name": "raid_bdev1", 00:08:59.779 "raid_level": "raid0", 00:08:59.779 "base_bdevs": [ 00:08:59.779 "malloc1", 00:08:59.779 "malloc2", 00:08:59.779 "malloc3" 00:08:59.779 ], 00:08:59.779 "strip_size_kb": 64, 00:08:59.779 "superblock": false, 00:08:59.779 "method": "bdev_raid_create", 00:08:59.779 "req_id": 1 00:08:59.779 } 00:08:59.779 Got JSON-RPC error response 00:08:59.779 response: 00:08:59.779 { 00:08:59.779 "code": -17, 00:08:59.779 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:59.779 } 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 [2024-11-06 12:39:48.311936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.779 [2024-11-06 12:39:48.312035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.779 [2024-11-06 12:39:48.312081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:59.779 [2024-11-06 12:39:48.312104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.779 [2024-11-06 12:39:48.315293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.779 [2024-11-06 12:39:48.315336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.779 [2024-11-06 12:39:48.315526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:59.779 [2024-11-06 12:39:48.315618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:59.779 pt1 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.779 "name": "raid_bdev1", 00:08:59.779 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:08:59.779 
"strip_size_kb": 64, 00:08:59.779 "state": "configuring", 00:08:59.779 "raid_level": "raid0", 00:08:59.779 "superblock": true, 00:08:59.779 "num_base_bdevs": 3, 00:08:59.779 "num_base_bdevs_discovered": 1, 00:08:59.779 "num_base_bdevs_operational": 3, 00:08:59.779 "base_bdevs_list": [ 00:08:59.779 { 00:08:59.779 "name": "pt1", 00:08:59.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.779 "is_configured": true, 00:08:59.779 "data_offset": 2048, 00:08:59.779 "data_size": 63488 00:08:59.779 }, 00:08:59.779 { 00:08:59.779 "name": null, 00:08:59.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.779 "is_configured": false, 00:08:59.779 "data_offset": 2048, 00:08:59.779 "data_size": 63488 00:08:59.779 }, 00:08:59.779 { 00:08:59.779 "name": null, 00:08:59.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.779 "is_configured": false, 00:08:59.779 "data_offset": 2048, 00:08:59.779 "data_size": 63488 00:08:59.779 } 00:08:59.779 ] 00:08:59.779 }' 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.779 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.347 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:00.347 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.347 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.347 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.347 [2024-11-06 12:39:48.844085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.347 [2024-11-06 12:39:48.844176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.347 [2024-11-06 12:39:48.844276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:00.347 [2024-11-06 12:39:48.844301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.347 [2024-11-06 12:39:48.844971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.347 [2024-11-06 12:39:48.845031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.347 [2024-11-06 12:39:48.845207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.347 [2024-11-06 12:39:48.845250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.347 pt2 00:09:00.347 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.348 [2024-11-06 12:39:48.852077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.348 12:39:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.348 "name": "raid_bdev1", 00:09:00.348 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:09:00.348 "strip_size_kb": 64, 00:09:00.348 "state": "configuring", 00:09:00.348 "raid_level": "raid0", 00:09:00.348 "superblock": true, 00:09:00.348 "num_base_bdevs": 3, 00:09:00.348 "num_base_bdevs_discovered": 1, 00:09:00.348 "num_base_bdevs_operational": 3, 00:09:00.348 "base_bdevs_list": [ 00:09:00.348 { 00:09:00.348 "name": "pt1", 00:09:00.348 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.348 "is_configured": true, 00:09:00.348 "data_offset": 2048, 00:09:00.348 "data_size": 63488 00:09:00.348 }, 00:09:00.348 { 00:09:00.348 "name": null, 00:09:00.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.348 "is_configured": false, 00:09:00.348 "data_offset": 0, 00:09:00.348 "data_size": 63488 00:09:00.348 }, 00:09:00.348 { 00:09:00.348 "name": null, 00:09:00.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.348 
"is_configured": false, 00:09:00.348 "data_offset": 2048, 00:09:00.348 "data_size": 63488 00:09:00.348 } 00:09:00.348 ] 00:09:00.348 }' 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.348 12:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 [2024-11-06 12:39:49.392275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.915 [2024-11-06 12:39:49.392387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.915 [2024-11-06 12:39:49.392421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:00.915 [2024-11-06 12:39:49.392440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.915 [2024-11-06 12:39:49.393129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.915 [2024-11-06 12:39:49.393176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.915 [2024-11-06 12:39:49.393323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:00.915 [2024-11-06 12:39:49.393375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.915 pt2 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 [2024-11-06 12:39:49.400246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:00.915 [2024-11-06 12:39:49.400321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.915 [2024-11-06 12:39:49.400349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:00.915 [2024-11-06 12:39:49.400368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.915 [2024-11-06 12:39:49.401016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.915 [2024-11-06 12:39:49.401073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:00.915 [2024-11-06 12:39:49.401178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:00.915 [2024-11-06 12:39:49.401242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:00.915 [2024-11-06 12:39:49.401417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.915 [2024-11-06 12:39:49.401451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.915 [2024-11-06 12:39:49.401784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:00.915 [2024-11-06 12:39:49.401997] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.915 [2024-11-06 12:39:49.402023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:00.915 [2024-11-06 12:39:49.402230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.915 pt3 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.915 "name": "raid_bdev1", 00:09:00.915 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:09:00.915 "strip_size_kb": 64, 00:09:00.915 "state": "online", 00:09:00.915 "raid_level": "raid0", 00:09:00.915 "superblock": true, 00:09:00.915 "num_base_bdevs": 3, 00:09:00.915 "num_base_bdevs_discovered": 3, 00:09:00.915 "num_base_bdevs_operational": 3, 00:09:00.915 "base_bdevs_list": [ 00:09:00.915 { 00:09:00.915 "name": "pt1", 00:09:00.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.915 "is_configured": true, 00:09:00.915 "data_offset": 2048, 00:09:00.915 "data_size": 63488 00:09:00.915 }, 00:09:00.915 { 00:09:00.915 "name": "pt2", 00:09:00.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.915 "is_configured": true, 00:09:00.915 "data_offset": 2048, 00:09:00.915 "data_size": 63488 00:09:00.915 }, 00:09:00.915 { 00:09:00.915 "name": "pt3", 00:09:00.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.915 "is_configured": true, 00:09:00.915 "data_offset": 2048, 00:09:00.915 "data_size": 63488 00:09:00.915 } 00:09:00.915 ] 00:09:00.915 }' 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.915 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:01.485 12:39:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.485 [2024-11-06 12:39:49.948857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.485 "name": "raid_bdev1", 00:09:01.485 "aliases": [ 00:09:01.485 "fe481b0b-7544-46bd-9533-b7d573aff4cf" 00:09:01.485 ], 00:09:01.485 "product_name": "Raid Volume", 00:09:01.485 "block_size": 512, 00:09:01.485 "num_blocks": 190464, 00:09:01.485 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:09:01.485 "assigned_rate_limits": { 00:09:01.485 "rw_ios_per_sec": 0, 00:09:01.485 "rw_mbytes_per_sec": 0, 00:09:01.485 "r_mbytes_per_sec": 0, 00:09:01.485 "w_mbytes_per_sec": 0 00:09:01.485 }, 00:09:01.485 "claimed": false, 00:09:01.485 "zoned": false, 00:09:01.485 "supported_io_types": { 00:09:01.485 "read": true, 00:09:01.485 "write": true, 00:09:01.485 "unmap": true, 00:09:01.485 "flush": true, 00:09:01.485 "reset": true, 00:09:01.485 "nvme_admin": false, 00:09:01.485 "nvme_io": false, 00:09:01.485 "nvme_io_md": false, 00:09:01.485 
"write_zeroes": true, 00:09:01.485 "zcopy": false, 00:09:01.485 "get_zone_info": false, 00:09:01.485 "zone_management": false, 00:09:01.485 "zone_append": false, 00:09:01.485 "compare": false, 00:09:01.485 "compare_and_write": false, 00:09:01.485 "abort": false, 00:09:01.485 "seek_hole": false, 00:09:01.485 "seek_data": false, 00:09:01.485 "copy": false, 00:09:01.485 "nvme_iov_md": false 00:09:01.485 }, 00:09:01.485 "memory_domains": [ 00:09:01.485 { 00:09:01.485 "dma_device_id": "system", 00:09:01.485 "dma_device_type": 1 00:09:01.485 }, 00:09:01.485 { 00:09:01.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.485 "dma_device_type": 2 00:09:01.485 }, 00:09:01.485 { 00:09:01.485 "dma_device_id": "system", 00:09:01.485 "dma_device_type": 1 00:09:01.485 }, 00:09:01.485 { 00:09:01.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.485 "dma_device_type": 2 00:09:01.485 }, 00:09:01.485 { 00:09:01.485 "dma_device_id": "system", 00:09:01.485 "dma_device_type": 1 00:09:01.485 }, 00:09:01.485 { 00:09:01.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.485 "dma_device_type": 2 00:09:01.485 } 00:09:01.485 ], 00:09:01.485 "driver_specific": { 00:09:01.485 "raid": { 00:09:01.485 "uuid": "fe481b0b-7544-46bd-9533-b7d573aff4cf", 00:09:01.485 "strip_size_kb": 64, 00:09:01.485 "state": "online", 00:09:01.485 "raid_level": "raid0", 00:09:01.485 "superblock": true, 00:09:01.485 "num_base_bdevs": 3, 00:09:01.485 "num_base_bdevs_discovered": 3, 00:09:01.485 "num_base_bdevs_operational": 3, 00:09:01.485 "base_bdevs_list": [ 00:09:01.485 { 00:09:01.485 "name": "pt1", 00:09:01.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.485 "is_configured": true, 00:09:01.485 "data_offset": 2048, 00:09:01.485 "data_size": 63488 00:09:01.485 }, 00:09:01.485 { 00:09:01.485 "name": "pt2", 00:09:01.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.485 "is_configured": true, 00:09:01.485 "data_offset": 2048, 00:09:01.485 "data_size": 63488 00:09:01.485 }, 00:09:01.485 
{ 00:09:01.485 "name": "pt3", 00:09:01.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.485 "is_configured": true, 00:09:01.485 "data_offset": 2048, 00:09:01.485 "data_size": 63488 00:09:01.485 } 00:09:01.485 ] 00:09:01.485 } 00:09:01.485 } 00:09:01.485 }' 00:09:01.485 12:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:01.485 pt2 00:09:01.485 pt3' 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.485 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:01.743 12:39:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:01.743 
[2024-11-06 12:39:50.268913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fe481b0b-7544-46bd-9533-b7d573aff4cf '!=' fe481b0b-7544-46bd-9533-b7d573aff4cf ']' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65090 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65090 ']' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65090 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65090 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.743 killing process with pid 65090 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65090' 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65090 00:09:01.743 12:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65090 00:09:01.743 [2024-11-06 12:39:50.344923] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.743 [2024-11-06 12:39:50.345103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.743 [2024-11-06 12:39:50.345227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.743 [2024-11-06 12:39:50.345252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:02.001 [2024-11-06 12:39:50.642024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.375 12:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:03.375 00:09:03.375 real 0m5.913s 00:09:03.375 user 0m8.844s 00:09:03.375 sys 0m0.881s 00:09:03.375 12:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.375 12:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 ************************************ 00:09:03.375 END TEST raid_superblock_test 00:09:03.375 ************************************ 00:09:03.375 12:39:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:03.375 12:39:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:03.375 12:39:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.375 12:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 ************************************ 00:09:03.375 START TEST raid_read_error_test 00:09:03.375 ************************************ 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:03.375 12:39:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qmKGx93y2F 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65349 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65349 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65349 ']' 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.375 12:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 [2024-11-06 12:39:51.939045] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:09:03.375 [2024-11-06 12:39:51.939253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65349 ] 00:09:03.633 [2024-11-06 12:39:52.127805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.633 [2024-11-06 12:39:52.279588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.891 [2024-11-06 12:39:52.481320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.891 [2024-11-06 12:39:52.481401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.457 BaseBdev1_malloc 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.457 true 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.457 [2024-11-06 12:39:53.089992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:04.457 [2024-11-06 12:39:53.090075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.457 [2024-11-06 12:39:53.090107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:04.457 [2024-11-06 12:39:53.090127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.457 [2024-11-06 12:39:53.092955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.457 [2024-11-06 12:39:53.093007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:04.457 BaseBdev1 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.457 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 BaseBdev2_malloc 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 true 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 [2024-11-06 12:39:53.150440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.715 [2024-11-06 12:39:53.150526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.715 [2024-11-06 12:39:53.150554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:04.715 [2024-11-06 12:39:53.150572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.715 [2024-11-06 12:39:53.153504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.715 [2024-11-06 12:39:53.153559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.715 BaseBdev2 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 BaseBdev3_malloc 00:09:04.715 12:39:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 true 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 [2024-11-06 12:39:53.229039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:04.715 [2024-11-06 12:39:53.229116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.715 [2024-11-06 12:39:53.229145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:04.715 [2024-11-06 12:39:53.229163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.715 [2024-11-06 12:39:53.231967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.715 [2024-11-06 12:39:53.232013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:04.715 BaseBdev3 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.715 [2024-11-06 12:39:53.237125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.715 [2024-11-06 12:39:53.239550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.715 [2024-11-06 12:39:53.239669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.715 [2024-11-06 12:39:53.239940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.715 [2024-11-06 12:39:53.239969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.715 [2024-11-06 12:39:53.240313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:04.715 [2024-11-06 12:39:53.240534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.715 [2024-11-06 12:39:53.240564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:04.715 [2024-11-06 12:39:53.240767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.715 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.716 12:39:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.716 "name": "raid_bdev1", 00:09:04.716 "uuid": "18e8c85e-dfac-4bdd-a1c9-213a8bf150bd", 00:09:04.716 "strip_size_kb": 64, 00:09:04.716 "state": "online", 00:09:04.716 "raid_level": "raid0", 00:09:04.716 "superblock": true, 00:09:04.716 "num_base_bdevs": 3, 00:09:04.716 "num_base_bdevs_discovered": 3, 00:09:04.716 "num_base_bdevs_operational": 3, 00:09:04.716 "base_bdevs_list": [ 00:09:04.716 { 00:09:04.716 "name": "BaseBdev1", 00:09:04.716 "uuid": "85922542-0e7f-5536-82d3-c77a778f00b7", 00:09:04.716 "is_configured": true, 00:09:04.716 "data_offset": 2048, 00:09:04.716 "data_size": 63488 00:09:04.716 }, 00:09:04.716 { 00:09:04.716 "name": "BaseBdev2", 00:09:04.716 "uuid": "f7d07bc5-a6c9-5d4d-9beb-2a0b74a7b4ca", 00:09:04.716 "is_configured": true, 00:09:04.716 "data_offset": 2048, 00:09:04.716 "data_size": 63488 
00:09:04.716 }, 00:09:04.716 { 00:09:04.716 "name": "BaseBdev3", 00:09:04.716 "uuid": "5504970e-d307-59f0-b0ae-541fdea78a3b", 00:09:04.716 "is_configured": true, 00:09:04.716 "data_offset": 2048, 00:09:04.716 "data_size": 63488 00:09:04.716 } 00:09:04.716 ] 00:09:04.716 }' 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.716 12:39:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.283 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.283 12:39:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:05.283 [2024-11-06 12:39:53.886686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.216 "name": "raid_bdev1", 00:09:06.216 "uuid": "18e8c85e-dfac-4bdd-a1c9-213a8bf150bd", 00:09:06.216 "strip_size_kb": 64, 00:09:06.216 "state": "online", 00:09:06.216 "raid_level": "raid0", 00:09:06.216 "superblock": true, 00:09:06.216 "num_base_bdevs": 3, 00:09:06.216 "num_base_bdevs_discovered": 3, 00:09:06.216 "num_base_bdevs_operational": 3, 00:09:06.216 "base_bdevs_list": [ 00:09:06.216 { 00:09:06.216 "name": "BaseBdev1", 00:09:06.216 "uuid": "85922542-0e7f-5536-82d3-c77a778f00b7", 00:09:06.216 "is_configured": true, 00:09:06.216 "data_offset": 2048, 00:09:06.216 "data_size": 63488 
00:09:06.216 }, 00:09:06.216 { 00:09:06.216 "name": "BaseBdev2", 00:09:06.216 "uuid": "f7d07bc5-a6c9-5d4d-9beb-2a0b74a7b4ca", 00:09:06.216 "is_configured": true, 00:09:06.216 "data_offset": 2048, 00:09:06.216 "data_size": 63488 00:09:06.216 }, 00:09:06.216 { 00:09:06.216 "name": "BaseBdev3", 00:09:06.216 "uuid": "5504970e-d307-59f0-b0ae-541fdea78a3b", 00:09:06.216 "is_configured": true, 00:09:06.216 "data_offset": 2048, 00:09:06.216 "data_size": 63488 00:09:06.216 } 00:09:06.216 ] 00:09:06.216 }' 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.216 12:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.784 [2024-11-06 12:39:55.306038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.784 [2024-11-06 12:39:55.306096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.784 [2024-11-06 12:39:55.309414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.784 [2024-11-06 12:39:55.309474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.784 [2024-11-06 12:39:55.309528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.784 [2024-11-06 12:39:55.309544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:06.784 { 00:09:06.784 "results": [ 00:09:06.784 { 00:09:06.784 "job": "raid_bdev1", 00:09:06.784 "core_mask": "0x1", 00:09:06.784 "workload": "randrw", 00:09:06.784 "percentage": 50, 
00:09:06.784 "status": "finished", 00:09:06.784 "queue_depth": 1, 00:09:06.784 "io_size": 131072, 00:09:06.784 "runtime": 1.416895, 00:09:06.784 "iops": 10638.755871112538, 00:09:06.784 "mibps": 1329.8444838890673, 00:09:06.784 "io_failed": 1, 00:09:06.784 "io_timeout": 0, 00:09:06.784 "avg_latency_us": 131.22452706166138, 00:09:06.784 "min_latency_us": 35.60727272727273, 00:09:06.784 "max_latency_us": 1824.581818181818 00:09:06.784 } 00:09:06.784 ], 00:09:06.784 "core_count": 1 00:09:06.784 } 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65349 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65349 ']' 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65349 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65349 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65349' 00:09:06.784 killing process with pid 65349 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65349 00:09:06.784 [2024-11-06 12:39:55.345635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.784 12:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65349 00:09:07.044 [2024-11-06 
12:39:55.556608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qmKGx93y2F 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.419 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.420 12:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:08.420 00:09:08.420 real 0m4.917s 00:09:08.420 user 0m6.176s 00:09:08.420 sys 0m0.569s 00:09:08.420 12:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.420 12:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.420 ************************************ 00:09:08.420 END TEST raid_read_error_test 00:09:08.420 ************************************ 00:09:08.420 12:39:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:08.420 12:39:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:08.420 12:39:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.420 12:39:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.420 ************************************ 00:09:08.420 START TEST raid_write_error_test 00:09:08.420 ************************************ 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:09:08.420 12:39:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:08.420 12:39:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RYXjQpuHdL 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65494 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65494 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65494 ']' 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.420 12:39:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.420 [2024-11-06 12:39:56.916155] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:09:08.420 [2024-11-06 12:39:56.916357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65494 ] 00:09:08.679 [2024-11-06 12:39:57.097173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.679 [2024-11-06 12:39:57.243879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.938 [2024-11-06 12:39:57.467740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.938 [2024-11-06 12:39:57.467837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.506 BaseBdev1_malloc 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.506 true 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.506 [2024-11-06 12:39:57.990838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.506 [2024-11-06 12:39:57.990924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.506 [2024-11-06 12:39:57.990956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.506 [2024-11-06 12:39:57.990974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.506 [2024-11-06 12:39:57.993994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.506 [2024-11-06 12:39:57.994065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:09.506 BaseBdev1 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.506 12:39:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.506 BaseBdev2_malloc 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.506 true 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.506 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.506 [2024-11-06 12:39:58.051295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.506 [2024-11-06 12:39:58.051381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.507 [2024-11-06 12:39:58.051410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.507 [2024-11-06 12:39:58.051429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.507 [2024-11-06 12:39:58.054380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.507 [2024-11-06 12:39:58.054433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:09.507 BaseBdev2 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.507 12:39:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.507 BaseBdev3_malloc 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.507 true 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.507 [2024-11-06 12:39:58.122935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:09.507 [2024-11-06 12:39:58.123008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.507 [2024-11-06 12:39:58.123038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:09.507 [2024-11-06 12:39:58.123056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.507 [2024-11-06 12:39:58.126022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.507 [2024-11-06 12:39:58.126090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:09.507 BaseBdev3 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.507 [2024-11-06 12:39:58.131033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.507 [2024-11-06 12:39:58.133632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.507 [2024-11-06 12:39:58.133756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.507 [2024-11-06 12:39:58.134027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:09.507 [2024-11-06 12:39:58.134060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:09.507 [2024-11-06 12:39:58.134392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:09.507 [2024-11-06 12:39:58.134632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:09.507 [2024-11-06 12:39:58.134664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:09.507 [2024-11-06 12:39:58.134850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.507 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.766 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.766 "name": "raid_bdev1", 00:09:09.766 "uuid": "25b5c20d-6792-4146-a045-92ce883826bb", 00:09:09.766 "strip_size_kb": 64, 00:09:09.766 "state": "online", 00:09:09.766 "raid_level": "raid0", 00:09:09.766 "superblock": true, 00:09:09.766 "num_base_bdevs": 3, 00:09:09.766 "num_base_bdevs_discovered": 3, 00:09:09.766 "num_base_bdevs_operational": 3, 00:09:09.766 "base_bdevs_list": [ 00:09:09.766 { 00:09:09.766 "name": "BaseBdev1", 
00:09:09.766 "uuid": "67db1740-4e97-5b57-b2a0-a69824de0c50", 00:09:09.766 "is_configured": true, 00:09:09.766 "data_offset": 2048, 00:09:09.766 "data_size": 63488 00:09:09.766 }, 00:09:09.766 { 00:09:09.766 "name": "BaseBdev2", 00:09:09.766 "uuid": "36f492d1-dfea-5f0d-83d7-98f24fafa5a0", 00:09:09.766 "is_configured": true, 00:09:09.766 "data_offset": 2048, 00:09:09.766 "data_size": 63488 00:09:09.766 }, 00:09:09.766 { 00:09:09.766 "name": "BaseBdev3", 00:09:09.766 "uuid": "c2666361-19d3-5553-be1c-0e180bb0f760", 00:09:09.766 "is_configured": true, 00:09:09.766 "data_offset": 2048, 00:09:09.766 "data_size": 63488 00:09:09.766 } 00:09:09.766 ] 00:09:09.766 }' 00:09:09.766 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.766 12:39:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.026 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.026 12:39:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.284 [2024-11-06 12:39:58.760770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.220 "name": "raid_bdev1", 00:09:11.220 "uuid": "25b5c20d-6792-4146-a045-92ce883826bb", 00:09:11.220 "strip_size_kb": 64, 00:09:11.220 "state": "online", 00:09:11.220 
"raid_level": "raid0", 00:09:11.220 "superblock": true, 00:09:11.220 "num_base_bdevs": 3, 00:09:11.220 "num_base_bdevs_discovered": 3, 00:09:11.220 "num_base_bdevs_operational": 3, 00:09:11.220 "base_bdevs_list": [ 00:09:11.220 { 00:09:11.220 "name": "BaseBdev1", 00:09:11.220 "uuid": "67db1740-4e97-5b57-b2a0-a69824de0c50", 00:09:11.220 "is_configured": true, 00:09:11.220 "data_offset": 2048, 00:09:11.220 "data_size": 63488 00:09:11.220 }, 00:09:11.220 { 00:09:11.220 "name": "BaseBdev2", 00:09:11.220 "uuid": "36f492d1-dfea-5f0d-83d7-98f24fafa5a0", 00:09:11.220 "is_configured": true, 00:09:11.220 "data_offset": 2048, 00:09:11.220 "data_size": 63488 00:09:11.220 }, 00:09:11.220 { 00:09:11.220 "name": "BaseBdev3", 00:09:11.220 "uuid": "c2666361-19d3-5553-be1c-0e180bb0f760", 00:09:11.220 "is_configured": true, 00:09:11.220 "data_offset": 2048, 00:09:11.220 "data_size": 63488 00:09:11.220 } 00:09:11.220 ] 00:09:11.220 }' 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.220 12:39:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.787 [2024-11-06 12:40:00.179340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.787 [2024-11-06 12:40:00.179393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.787 [2024-11-06 12:40:00.182743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.787 [2024-11-06 12:40:00.182811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.787 [2024-11-06 12:40:00.182871] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.787 [2024-11-06 12:40:00.182887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:11.787 { 00:09:11.787 "results": [ 00:09:11.787 { 00:09:11.787 "job": "raid_bdev1", 00:09:11.787 "core_mask": "0x1", 00:09:11.787 "workload": "randrw", 00:09:11.787 "percentage": 50, 00:09:11.787 "status": "finished", 00:09:11.787 "queue_depth": 1, 00:09:11.787 "io_size": 131072, 00:09:11.787 "runtime": 1.415918, 00:09:11.787 "iops": 9916.534714580928, 00:09:11.787 "mibps": 1239.566839322616, 00:09:11.787 "io_failed": 1, 00:09:11.787 "io_timeout": 0, 00:09:11.787 "avg_latency_us": 141.5759205500382, 00:09:11.787 "min_latency_us": 43.52, 00:09:11.787 "max_latency_us": 1846.9236363636364 00:09:11.787 } 00:09:11.787 ], 00:09:11.787 "core_count": 1 00:09:11.787 } 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65494 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65494 ']' 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65494 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65494 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:11.787 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:11.788 killing process with pid 65494 00:09:11.788 12:40:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65494' 00:09:11.788 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65494 00:09:11.788 [2024-11-06 12:40:00.218211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.788 12:40:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65494 00:09:11.788 [2024-11-06 12:40:00.439703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RYXjQpuHdL 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:13.164 ************************************ 00:09:13.164 END TEST raid_write_error_test 00:09:13.164 ************************************ 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:13.164 00:09:13.164 real 0m4.829s 00:09:13.164 user 0m5.938s 00:09:13.164 sys 0m0.632s 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.164 12:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.164 12:40:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:13.164 12:40:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:13.164 12:40:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:13.164 12:40:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.164 12:40:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.164 ************************************ 00:09:13.164 START TEST raid_state_function_test 00:09:13.164 ************************************ 00:09:13.164 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:13.164 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:13.164 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:13.164 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:13.164 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.164 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:13.165 12:40:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65638 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.165 Process raid pid: 65638 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65638' 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65638 00:09:13.165 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65638 ']' 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.165 12:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.165 [2024-11-06 12:40:01.791312] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:09:13.165 [2024-11-06 12:40:01.791497] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.423 [2024-11-06 12:40:01.973054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.681 [2024-11-06 12:40:02.126166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.939 [2024-11-06 12:40:02.372959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.939 [2024-11-06 12:40:02.373048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.196 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:14.196 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:14.196 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 
-r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.196 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.196 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.196 [2024-11-06 12:40:02.803267] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.196 [2024-11-06 12:40:02.803504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.196 [2024-11-06 12:40:02.803537] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.196 [2024-11-06 12:40:02.803557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.196 [2024-11-06 12:40:02.803568] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.196 [2024-11-06 12:40:02.803585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.197 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.455 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.455 "name": "Existed_Raid", 00:09:14.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.455 "strip_size_kb": 64, 00:09:14.455 "state": "configuring", 00:09:14.455 "raid_level": "concat", 00:09:14.455 "superblock": false, 00:09:14.455 "num_base_bdevs": 3, 00:09:14.455 "num_base_bdevs_discovered": 0, 00:09:14.455 "num_base_bdevs_operational": 3, 00:09:14.455 "base_bdevs_list": [ 00:09:14.455 { 00:09:14.455 "name": "BaseBdev1", 00:09:14.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.455 "is_configured": false, 00:09:14.455 "data_offset": 0, 00:09:14.455 "data_size": 0 00:09:14.455 }, 00:09:14.455 { 00:09:14.455 "name": "BaseBdev2", 00:09:14.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.455 "is_configured": false, 00:09:14.455 "data_offset": 0, 00:09:14.455 "data_size": 0 00:09:14.455 }, 00:09:14.455 { 00:09:14.455 "name": "BaseBdev3", 00:09:14.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.455 "is_configured": 
false, 00:09:14.455 "data_offset": 0, 00:09:14.455 "data_size": 0 00:09:14.455 } 00:09:14.455 ] 00:09:14.455 }' 00:09:14.455 12:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.455 12:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.713 [2024-11-06 12:40:03.307328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.713 [2024-11-06 12:40:03.307424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.713 [2024-11-06 12:40:03.319272] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.713 [2024-11-06 12:40:03.319489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.713 [2024-11-06 12:40:03.319621] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.713 [2024-11-06 12:40:03.319685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.713 [2024-11-06 12:40:03.319889] 
bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.713 [2024-11-06 12:40:03.319953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.713 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.713 [2024-11-06 12:40:03.368294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.713 BaseBdev1 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 [ 00:09:14.972 { 00:09:14.972 "name": "BaseBdev1", 00:09:14.972 "aliases": [ 00:09:14.972 "8fcbc4cb-8100-46b8-b825-5b69d4721b6d" 00:09:14.972 ], 00:09:14.972 "product_name": "Malloc disk", 00:09:14.972 "block_size": 512, 00:09:14.972 "num_blocks": 65536, 00:09:14.972 "uuid": "8fcbc4cb-8100-46b8-b825-5b69d4721b6d", 00:09:14.972 "assigned_rate_limits": { 00:09:14.972 "rw_ios_per_sec": 0, 00:09:14.972 "rw_mbytes_per_sec": 0, 00:09:14.972 "r_mbytes_per_sec": 0, 00:09:14.972 "w_mbytes_per_sec": 0 00:09:14.972 }, 00:09:14.972 "claimed": true, 00:09:14.972 "claim_type": "exclusive_write", 00:09:14.972 "zoned": false, 00:09:14.972 "supported_io_types": { 00:09:14.972 "read": true, 00:09:14.972 "write": true, 00:09:14.972 "unmap": true, 00:09:14.972 "flush": true, 00:09:14.972 "reset": true, 00:09:14.972 "nvme_admin": false, 00:09:14.972 "nvme_io": false, 00:09:14.972 "nvme_io_md": false, 00:09:14.972 "write_zeroes": true, 00:09:14.972 "zcopy": true, 00:09:14.972 "get_zone_info": false, 00:09:14.972 "zone_management": false, 00:09:14.972 "zone_append": false, 00:09:14.972 "compare": false, 00:09:14.972 "compare_and_write": false, 00:09:14.972 "abort": true, 00:09:14.972 "seek_hole": false, 00:09:14.972 "seek_data": false, 00:09:14.972 "copy": true, 00:09:14.972 "nvme_iov_md": false 00:09:14.972 }, 00:09:14.972 "memory_domains": [ 00:09:14.972 { 00:09:14.972 "dma_device_id": "system", 00:09:14.972 "dma_device_type": 1 00:09:14.972 }, 00:09:14.972 { 00:09:14.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.972 "dma_device_type": 2 00:09:14.972 } 00:09:14.972 ], 
00:09:14.972 "driver_specific": {} 00:09:14.972 } 00:09:14.972 ] 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:14.972 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.972 "name": "Existed_Raid", 00:09:14.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.972 "strip_size_kb": 64, 00:09:14.972 "state": "configuring", 00:09:14.972 "raid_level": "concat", 00:09:14.972 "superblock": false, 00:09:14.972 "num_base_bdevs": 3, 00:09:14.972 "num_base_bdevs_discovered": 1, 00:09:14.973 "num_base_bdevs_operational": 3, 00:09:14.973 "base_bdevs_list": [ 00:09:14.973 { 00:09:14.973 "name": "BaseBdev1", 00:09:14.973 "uuid": "8fcbc4cb-8100-46b8-b825-5b69d4721b6d", 00:09:14.973 "is_configured": true, 00:09:14.973 "data_offset": 0, 00:09:14.973 "data_size": 65536 00:09:14.973 }, 00:09:14.973 { 00:09:14.973 "name": "BaseBdev2", 00:09:14.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.973 "is_configured": false, 00:09:14.973 "data_offset": 0, 00:09:14.973 "data_size": 0 00:09:14.973 }, 00:09:14.973 { 00:09:14.973 "name": "BaseBdev3", 00:09:14.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.973 "is_configured": false, 00:09:14.973 "data_offset": 0, 00:09:14.973 "data_size": 0 00:09:14.973 } 00:09:14.973 ] 00:09:14.973 }' 00:09:14.973 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.973 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.541 [2024-11-06 12:40:03.924581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.541 [2024-11-06 12:40:03.924671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.541 [2024-11-06 12:40:03.932555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.541 [2024-11-06 12:40:03.935511] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.541 [2024-11-06 12:40:03.935698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.541 [2024-11-06 12:40:03.935837] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.541 [2024-11-06 12:40:03.935873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.541 "name": "Existed_Raid", 00:09:15.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.541 "strip_size_kb": 64, 00:09:15.541 "state": "configuring", 00:09:15.541 "raid_level": "concat", 00:09:15.541 "superblock": false, 00:09:15.541 "num_base_bdevs": 3, 00:09:15.541 "num_base_bdevs_discovered": 1, 00:09:15.541 "num_base_bdevs_operational": 3, 00:09:15.541 "base_bdevs_list": [ 00:09:15.541 { 00:09:15.541 "name": "BaseBdev1", 00:09:15.541 "uuid": "8fcbc4cb-8100-46b8-b825-5b69d4721b6d", 00:09:15.541 "is_configured": true, 00:09:15.541 "data_offset": 0, 00:09:15.541 "data_size": 65536 00:09:15.541 }, 00:09:15.541 { 
00:09:15.541 "name": "BaseBdev2", 00:09:15.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.541 "is_configured": false, 00:09:15.541 "data_offset": 0, 00:09:15.541 "data_size": 0 00:09:15.541 }, 00:09:15.541 { 00:09:15.541 "name": "BaseBdev3", 00:09:15.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.541 "is_configured": false, 00:09:15.541 "data_offset": 0, 00:09:15.541 "data_size": 0 00:09:15.541 } 00:09:15.541 ] 00:09:15.541 }' 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.541 12:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.800 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.800 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.800 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 BaseBdev2 00:09:16.109 [2024-11-06 12:40:04.472513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.109 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.109 [ 00:09:16.109 { 00:09:16.109 "name": "BaseBdev2", 00:09:16.109 "aliases": [ 00:09:16.109 "47988a33-e7da-42a5-9aff-05d7d63938ef" 00:09:16.109 ], 00:09:16.109 "product_name": "Malloc disk", 00:09:16.109 "block_size": 512, 00:09:16.109 "num_blocks": 65536, 00:09:16.109 "uuid": "47988a33-e7da-42a5-9aff-05d7d63938ef", 00:09:16.109 "assigned_rate_limits": { 00:09:16.109 "rw_ios_per_sec": 0, 00:09:16.109 "rw_mbytes_per_sec": 0, 00:09:16.110 "r_mbytes_per_sec": 0, 00:09:16.110 "w_mbytes_per_sec": 0 00:09:16.110 }, 00:09:16.110 "claimed": true, 00:09:16.110 "claim_type": "exclusive_write", 00:09:16.110 "zoned": false, 00:09:16.110 "supported_io_types": { 00:09:16.110 "read": true, 00:09:16.110 "write": true, 00:09:16.110 "unmap": true, 00:09:16.110 "flush": true, 00:09:16.110 "reset": true, 00:09:16.110 "nvme_admin": false, 00:09:16.110 "nvme_io": false, 00:09:16.110 "nvme_io_md": false, 00:09:16.110 "write_zeroes": true, 00:09:16.110 "zcopy": true, 00:09:16.110 "get_zone_info": false, 00:09:16.110 "zone_management": false, 00:09:16.110 "zone_append": false, 00:09:16.110 "compare": false, 00:09:16.110 "compare_and_write": false, 00:09:16.110 "abort": true, 00:09:16.110 "seek_hole": false, 00:09:16.110 "seek_data": false, 00:09:16.110 
"copy": true, 00:09:16.110 "nvme_iov_md": false 00:09:16.110 }, 00:09:16.110 "memory_domains": [ 00:09:16.110 { 00:09:16.110 "dma_device_id": "system", 00:09:16.110 "dma_device_type": 1 00:09:16.110 }, 00:09:16.110 { 00:09:16.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.110 "dma_device_type": 2 00:09:16.110 } 00:09:16.110 ], 00:09:16.110 "driver_specific": {} 00:09:16.110 } 00:09:16.110 ] 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.110 
12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.110 "name": "Existed_Raid", 00:09:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.110 "strip_size_kb": 64, 00:09:16.110 "state": "configuring", 00:09:16.110 "raid_level": "concat", 00:09:16.110 "superblock": false, 00:09:16.110 "num_base_bdevs": 3, 00:09:16.110 "num_base_bdevs_discovered": 2, 00:09:16.110 "num_base_bdevs_operational": 3, 00:09:16.110 "base_bdevs_list": [ 00:09:16.110 { 00:09:16.110 "name": "BaseBdev1", 00:09:16.110 "uuid": "8fcbc4cb-8100-46b8-b825-5b69d4721b6d", 00:09:16.110 "is_configured": true, 00:09:16.110 "data_offset": 0, 00:09:16.110 "data_size": 65536 00:09:16.110 }, 00:09:16.110 { 00:09:16.110 "name": "BaseBdev2", 00:09:16.110 "uuid": "47988a33-e7da-42a5-9aff-05d7d63938ef", 00:09:16.110 "is_configured": true, 00:09:16.110 "data_offset": 0, 00:09:16.110 "data_size": 65536 00:09:16.110 }, 00:09:16.110 { 00:09:16.110 "name": "BaseBdev3", 00:09:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.110 "is_configured": false, 00:09:16.110 "data_offset": 0, 00:09:16.110 "data_size": 0 00:09:16.110 } 00:09:16.110 ] 00:09:16.110 }' 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.110 12:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 12:40:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.676 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.676 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.676 [2024-11-06 12:40:05.082961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.677 [2024-11-06 12:40:05.083404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:16.677 [2024-11-06 12:40:05.083442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:16.677 [2024-11-06 12:40:05.083828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.677 [2024-11-06 12:40:05.084081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:16.677 [2024-11-06 12:40:05.084100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:16.677 [2024-11-06 12:40:05.084470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.677 BaseBdev3 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.677 [ 00:09:16.677 { 00:09:16.677 "name": "BaseBdev3", 00:09:16.677 "aliases": [ 00:09:16.677 "d9f42fb2-60c6-4ffd-b151-5fc6b76e4a62" 00:09:16.677 ], 00:09:16.677 "product_name": "Malloc disk", 00:09:16.677 "block_size": 512, 00:09:16.677 "num_blocks": 65536, 00:09:16.677 "uuid": "d9f42fb2-60c6-4ffd-b151-5fc6b76e4a62", 00:09:16.677 "assigned_rate_limits": { 00:09:16.677 "rw_ios_per_sec": 0, 00:09:16.677 "rw_mbytes_per_sec": 0, 00:09:16.677 "r_mbytes_per_sec": 0, 00:09:16.677 "w_mbytes_per_sec": 0 00:09:16.677 }, 00:09:16.677 "claimed": true, 00:09:16.677 "claim_type": "exclusive_write", 00:09:16.677 "zoned": false, 00:09:16.677 "supported_io_types": { 00:09:16.677 "read": true, 00:09:16.677 "write": true, 00:09:16.677 "unmap": true, 00:09:16.677 "flush": true, 00:09:16.677 "reset": true, 00:09:16.677 "nvme_admin": false, 00:09:16.677 "nvme_io": false, 00:09:16.677 "nvme_io_md": false, 00:09:16.677 "write_zeroes": true, 00:09:16.677 "zcopy": true, 00:09:16.677 "get_zone_info": false, 00:09:16.677 "zone_management": false, 00:09:16.677 "zone_append": false, 00:09:16.677 "compare": false, 00:09:16.677 "compare_and_write": false, 
00:09:16.677 "abort": true, 00:09:16.677 "seek_hole": false, 00:09:16.677 "seek_data": false, 00:09:16.677 "copy": true, 00:09:16.677 "nvme_iov_md": false 00:09:16.677 }, 00:09:16.677 "memory_domains": [ 00:09:16.677 { 00:09:16.677 "dma_device_id": "system", 00:09:16.677 "dma_device_type": 1 00:09:16.677 }, 00:09:16.677 { 00:09:16.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.677 "dma_device_type": 2 00:09:16.677 } 00:09:16.677 ], 00:09:16.677 "driver_specific": {} 00:09:16.677 } 00:09:16.677 ] 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.677 
12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.677 "name": "Existed_Raid", 00:09:16.677 "uuid": "6d63de96-f631-4385-a94f-3cffe31d5d4a", 00:09:16.677 "strip_size_kb": 64, 00:09:16.677 "state": "online", 00:09:16.677 "raid_level": "concat", 00:09:16.677 "superblock": false, 00:09:16.677 "num_base_bdevs": 3, 00:09:16.677 "num_base_bdevs_discovered": 3, 00:09:16.677 "num_base_bdevs_operational": 3, 00:09:16.677 "base_bdevs_list": [ 00:09:16.677 { 00:09:16.677 "name": "BaseBdev1", 00:09:16.677 "uuid": "8fcbc4cb-8100-46b8-b825-5b69d4721b6d", 00:09:16.677 "is_configured": true, 00:09:16.677 "data_offset": 0, 00:09:16.677 "data_size": 65536 00:09:16.677 }, 00:09:16.677 { 00:09:16.677 "name": "BaseBdev2", 00:09:16.677 "uuid": "47988a33-e7da-42a5-9aff-05d7d63938ef", 00:09:16.677 "is_configured": true, 00:09:16.677 "data_offset": 0, 00:09:16.677 "data_size": 65536 00:09:16.677 }, 00:09:16.677 { 00:09:16.677 "name": "BaseBdev3", 00:09:16.677 "uuid": "d9f42fb2-60c6-4ffd-b151-5fc6b76e4a62", 00:09:16.677 "is_configured": true, 00:09:16.677 "data_offset": 0, 00:09:16.677 "data_size": 65536 00:09:16.677 } 00:09:16.677 ] 00:09:16.677 }' 00:09:16.677 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.677 12:40:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.245 [2024-11-06 12:40:05.643620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.245 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.245 "name": "Existed_Raid", 00:09:17.245 "aliases": [ 00:09:17.245 "6d63de96-f631-4385-a94f-3cffe31d5d4a" 00:09:17.245 ], 00:09:17.245 "product_name": "Raid Volume", 00:09:17.245 "block_size": 512, 00:09:17.245 "num_blocks": 196608, 00:09:17.245 "uuid": "6d63de96-f631-4385-a94f-3cffe31d5d4a", 00:09:17.245 "assigned_rate_limits": { 00:09:17.245 "rw_ios_per_sec": 0, 00:09:17.245 "rw_mbytes_per_sec": 0, 00:09:17.245 "r_mbytes_per_sec": 0, 00:09:17.245 
"w_mbytes_per_sec": 0 00:09:17.245 }, 00:09:17.245 "claimed": false, 00:09:17.245 "zoned": false, 00:09:17.245 "supported_io_types": { 00:09:17.245 "read": true, 00:09:17.245 "write": true, 00:09:17.245 "unmap": true, 00:09:17.245 "flush": true, 00:09:17.245 "reset": true, 00:09:17.245 "nvme_admin": false, 00:09:17.245 "nvme_io": false, 00:09:17.245 "nvme_io_md": false, 00:09:17.245 "write_zeroes": true, 00:09:17.245 "zcopy": false, 00:09:17.245 "get_zone_info": false, 00:09:17.245 "zone_management": false, 00:09:17.245 "zone_append": false, 00:09:17.245 "compare": false, 00:09:17.245 "compare_and_write": false, 00:09:17.245 "abort": false, 00:09:17.245 "seek_hole": false, 00:09:17.245 "seek_data": false, 00:09:17.245 "copy": false, 00:09:17.245 "nvme_iov_md": false 00:09:17.245 }, 00:09:17.245 "memory_domains": [ 00:09:17.245 { 00:09:17.245 "dma_device_id": "system", 00:09:17.245 "dma_device_type": 1 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.245 "dma_device_type": 2 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "dma_device_id": "system", 00:09:17.245 "dma_device_type": 1 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.245 "dma_device_type": 2 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "dma_device_id": "system", 00:09:17.245 "dma_device_type": 1 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.245 "dma_device_type": 2 00:09:17.245 } 00:09:17.245 ], 00:09:17.245 "driver_specific": { 00:09:17.245 "raid": { 00:09:17.245 "uuid": "6d63de96-f631-4385-a94f-3cffe31d5d4a", 00:09:17.245 "strip_size_kb": 64, 00:09:17.245 "state": "online", 00:09:17.245 "raid_level": "concat", 00:09:17.245 "superblock": false, 00:09:17.245 "num_base_bdevs": 3, 00:09:17.245 "num_base_bdevs_discovered": 3, 00:09:17.245 "num_base_bdevs_operational": 3, 00:09:17.245 "base_bdevs_list": [ 00:09:17.245 { 00:09:17.245 "name": "BaseBdev1", 00:09:17.245 "uuid": 
"8fcbc4cb-8100-46b8-b825-5b69d4721b6d", 00:09:17.245 "is_configured": true, 00:09:17.245 "data_offset": 0, 00:09:17.245 "data_size": 65536 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "name": "BaseBdev2", 00:09:17.245 "uuid": "47988a33-e7da-42a5-9aff-05d7d63938ef", 00:09:17.245 "is_configured": true, 00:09:17.245 "data_offset": 0, 00:09:17.245 "data_size": 65536 00:09:17.245 }, 00:09:17.245 { 00:09:17.245 "name": "BaseBdev3", 00:09:17.246 "uuid": "d9f42fb2-60c6-4ffd-b151-5fc6b76e4a62", 00:09:17.246 "is_configured": true, 00:09:17.246 "data_offset": 0, 00:09:17.246 "data_size": 65536 00:09:17.246 } 00:09:17.246 ] 00:09:17.246 } 00:09:17.246 } 00:09:17.246 }' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:17.246 BaseBdev2 00:09:17.246 BaseBdev3' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.246 
12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.246 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.505 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.505 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.505 
12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.505 12:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.505 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.505 12:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.505 [2024-11-06 12:40:05.943322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.505 [2024-11-06 12:40:05.943504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.505 [2024-11-06 12:40:05.943759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
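Editor's note: the odd-looking `cmp_base_bdev='512   '` values and the `[[ 512 == \5\1\2\ \ \ ]]` matches above come from jq's `join()` rendering null fields as empty strings. A hedged sketch against a hypothetical malloc-style bdev record (assumes jq >= 1.6, which accepts numbers and nulls inside `join()`):

```shell
# On a plain malloc bdev, md_size/md_interleave/dif_type are absent
# (null), so join(" ") emits the block size followed by three
# separator-only spaces -- the '512   ' compared at bdev_raid.sh@193.
bdev_info='[{"name":"BaseBdev1","block_size":512}]'

cmp_base_bdev=$(echo "$bdev_info" | jq -r \
  '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
printf '%s|\n' "$cmp_base_bdev"   # sentinel makes the trailing spaces visible
```

This is why the test compares against a pattern with escaped trailing spaces rather than against bare `512`.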
local strip_size=64 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.505 "name": "Existed_Raid", 00:09:17.505 "uuid": "6d63de96-f631-4385-a94f-3cffe31d5d4a", 00:09:17.505 "strip_size_kb": 64, 00:09:17.505 "state": "offline", 00:09:17.505 "raid_level": "concat", 00:09:17.505 "superblock": false, 00:09:17.505 "num_base_bdevs": 3, 00:09:17.505 "num_base_bdevs_discovered": 2, 00:09:17.505 "num_base_bdevs_operational": 2, 00:09:17.505 "base_bdevs_list": [ 00:09:17.505 { 00:09:17.505 "name": null, 00:09:17.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.505 "is_configured": false, 00:09:17.505 "data_offset": 0, 00:09:17.505 "data_size": 65536 00:09:17.505 }, 00:09:17.505 { 00:09:17.505 "name": "BaseBdev2", 00:09:17.505 "uuid": "47988a33-e7da-42a5-9aff-05d7d63938ef", 00:09:17.505 
"is_configured": true, 00:09:17.505 "data_offset": 0, 00:09:17.505 "data_size": 65536 00:09:17.505 }, 00:09:17.505 { 00:09:17.505 "name": "BaseBdev3", 00:09:17.505 "uuid": "d9f42fb2-60c6-4ffd-b151-5fc6b76e4a62", 00:09:17.505 "is_configured": true, 00:09:17.505 "data_offset": 0, 00:09:17.505 "data_size": 65536 00:09:17.505 } 00:09:17.505 ] 00:09:17.505 }' 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.505 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.072 [2024-11-06 12:40:06.593693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.072 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.331 [2024-11-06 12:40:06.739893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.331 [2024-11-06 12:40:06.740131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.331 BaseBdev2 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 
-- # local i 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.331 [ 00:09:18.331 { 00:09:18.331 "name": "BaseBdev2", 00:09:18.331 "aliases": [ 00:09:18.331 "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e" 00:09:18.331 ], 00:09:18.331 "product_name": "Malloc disk", 00:09:18.331 "block_size": 512, 00:09:18.331 "num_blocks": 65536, 00:09:18.331 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:18.331 "assigned_rate_limits": { 00:09:18.331 "rw_ios_per_sec": 0, 00:09:18.331 "rw_mbytes_per_sec": 0, 00:09:18.331 "r_mbytes_per_sec": 0, 00:09:18.331 "w_mbytes_per_sec": 0 00:09:18.331 }, 00:09:18.331 "claimed": false, 00:09:18.331 "zoned": false, 00:09:18.331 "supported_io_types": { 00:09:18.331 "read": true, 00:09:18.331 "write": true, 00:09:18.331 "unmap": true, 00:09:18.331 "flush": true, 00:09:18.331 "reset": true, 00:09:18.331 "nvme_admin": false, 00:09:18.331 "nvme_io": false, 00:09:18.331 "nvme_io_md": false, 00:09:18.331 "write_zeroes": true, 00:09:18.331 "zcopy": true, 00:09:18.331 "get_zone_info": false, 
00:09:18.331 "zone_management": false, 00:09:18.331 "zone_append": false, 00:09:18.331 "compare": false, 00:09:18.331 "compare_and_write": false, 00:09:18.331 "abort": true, 00:09:18.331 "seek_hole": false, 00:09:18.331 "seek_data": false, 00:09:18.331 "copy": true, 00:09:18.331 "nvme_iov_md": false 00:09:18.331 }, 00:09:18.331 "memory_domains": [ 00:09:18.331 { 00:09:18.331 "dma_device_id": "system", 00:09:18.331 "dma_device_type": 1 00:09:18.331 }, 00:09:18.331 { 00:09:18.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.331 "dma_device_type": 2 00:09:18.331 } 00:09:18.331 ], 00:09:18.331 "driver_specific": {} 00:09:18.331 } 00:09:18.331 ] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.331 12:40:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.590 BaseBdev3 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 
00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.590 [ 00:09:18.590 { 00:09:18.590 "name": "BaseBdev3", 00:09:18.590 "aliases": [ 00:09:18.590 "a61c1d52-2820-4fae-a289-30a7dcff2f91" 00:09:18.590 ], 00:09:18.590 "product_name": "Malloc disk", 00:09:18.590 "block_size": 512, 00:09:18.590 "num_blocks": 65536, 00:09:18.590 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:18.590 "assigned_rate_limits": { 00:09:18.590 "rw_ios_per_sec": 0, 00:09:18.590 "rw_mbytes_per_sec": 0, 00:09:18.590 "r_mbytes_per_sec": 0, 00:09:18.590 "w_mbytes_per_sec": 0 00:09:18.590 }, 00:09:18.590 "claimed": false, 00:09:18.590 "zoned": false, 00:09:18.590 "supported_io_types": { 00:09:18.590 "read": true, 00:09:18.590 "write": true, 00:09:18.590 "unmap": true, 00:09:18.590 "flush": true, 00:09:18.590 "reset": true, 00:09:18.590 "nvme_admin": false, 00:09:18.590 "nvme_io": false, 00:09:18.590 "nvme_io_md": false, 00:09:18.590 "write_zeroes": true, 00:09:18.590 "zcopy": true, 00:09:18.590 "get_zone_info": false, 00:09:18.590 
"zone_management": false, 00:09:18.590 "zone_append": false, 00:09:18.590 "compare": false, 00:09:18.590 "compare_and_write": false, 00:09:18.590 "abort": true, 00:09:18.590 "seek_hole": false, 00:09:18.590 "seek_data": false, 00:09:18.590 "copy": true, 00:09:18.590 "nvme_iov_md": false 00:09:18.590 }, 00:09:18.590 "memory_domains": [ 00:09:18.590 { 00:09:18.590 "dma_device_id": "system", 00:09:18.590 "dma_device_type": 1 00:09:18.590 }, 00:09:18.590 { 00:09:18.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.590 "dma_device_type": 2 00:09:18.590 } 00:09:18.590 ], 00:09:18.590 "driver_specific": {} 00:09:18.590 } 00:09:18.590 ] 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:18.590 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.591 [2024-11-06 12:40:07.037765] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.591 [2024-11-06 12:40:07.037974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.591 [2024-11-06 12:40:07.038175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.591 [2024-11-06 12:40:07.040992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.591 12:40:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.591 "name": "Existed_Raid", 00:09:18.591 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:18.591 "strip_size_kb": 64, 00:09:18.591 "state": "configuring", 00:09:18.591 "raid_level": "concat", 00:09:18.591 "superblock": false, 00:09:18.591 "num_base_bdevs": 3, 00:09:18.591 "num_base_bdevs_discovered": 2, 00:09:18.591 "num_base_bdevs_operational": 3, 00:09:18.591 "base_bdevs_list": [ 00:09:18.591 { 00:09:18.591 "name": "BaseBdev1", 00:09:18.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.591 "is_configured": false, 00:09:18.591 "data_offset": 0, 00:09:18.591 "data_size": 0 00:09:18.591 }, 00:09:18.591 { 00:09:18.591 "name": "BaseBdev2", 00:09:18.591 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:18.591 "is_configured": true, 00:09:18.591 "data_offset": 0, 00:09:18.591 "data_size": 65536 00:09:18.591 }, 00:09:18.591 { 00:09:18.591 "name": "BaseBdev3", 00:09:18.591 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:18.591 "is_configured": true, 00:09:18.591 "data_offset": 0, 00:09:18.591 "data_size": 65536 00:09:18.591 } 00:09:18.591 ] 00:09:18.591 }' 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.591 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.197 [2024-11-06 12:40:07.545911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.197 12:40:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.197 "name": "Existed_Raid", 00:09:19.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.197 "strip_size_kb": 64, 00:09:19.197 "state": "configuring", 00:09:19.197 "raid_level": "concat", 00:09:19.197 "superblock": false, 00:09:19.197 "num_base_bdevs": 3, 00:09:19.197 "num_base_bdevs_discovered": 1, 00:09:19.197 
"num_base_bdevs_operational": 3, 00:09:19.197 "base_bdevs_list": [ 00:09:19.197 { 00:09:19.197 "name": "BaseBdev1", 00:09:19.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.197 "is_configured": false, 00:09:19.197 "data_offset": 0, 00:09:19.197 "data_size": 0 00:09:19.197 }, 00:09:19.197 { 00:09:19.197 "name": null, 00:09:19.197 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:19.197 "is_configured": false, 00:09:19.197 "data_offset": 0, 00:09:19.197 "data_size": 65536 00:09:19.197 }, 00:09:19.197 { 00:09:19.197 "name": "BaseBdev3", 00:09:19.197 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:19.197 "is_configured": true, 00:09:19.197 "data_offset": 0, 00:09:19.197 "data_size": 65536 00:09:19.197 } 00:09:19.197 ] 00:09:19.197 }' 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.197 12:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.456 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.456 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:19.456 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.456 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.456 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:19.715 [2024-11-06 12:40:08.172658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.715 BaseBdev1 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.715 [ 00:09:19.715 { 00:09:19.715 "name": "BaseBdev1", 00:09:19.715 "aliases": [ 00:09:19.715 "3c732e42-b57c-4264-a350-5fc0f1903137" 00:09:19.715 ], 00:09:19.715 "product_name": "Malloc disk", 00:09:19.715 "block_size": 512, 00:09:19.715 "num_blocks": 65536, 00:09:19.715 
"uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:19.715 "assigned_rate_limits": { 00:09:19.715 "rw_ios_per_sec": 0, 00:09:19.715 "rw_mbytes_per_sec": 0, 00:09:19.715 "r_mbytes_per_sec": 0, 00:09:19.715 "w_mbytes_per_sec": 0 00:09:19.715 }, 00:09:19.715 "claimed": true, 00:09:19.715 "claim_type": "exclusive_write", 00:09:19.715 "zoned": false, 00:09:19.715 "supported_io_types": { 00:09:19.715 "read": true, 00:09:19.715 "write": true, 00:09:19.715 "unmap": true, 00:09:19.715 "flush": true, 00:09:19.715 "reset": true, 00:09:19.715 "nvme_admin": false, 00:09:19.715 "nvme_io": false, 00:09:19.715 "nvme_io_md": false, 00:09:19.715 "write_zeroes": true, 00:09:19.715 "zcopy": true, 00:09:19.715 "get_zone_info": false, 00:09:19.715 "zone_management": false, 00:09:19.715 "zone_append": false, 00:09:19.715 "compare": false, 00:09:19.715 "compare_and_write": false, 00:09:19.715 "abort": true, 00:09:19.715 "seek_hole": false, 00:09:19.715 "seek_data": false, 00:09:19.715 "copy": true, 00:09:19.715 "nvme_iov_md": false 00:09:19.715 }, 00:09:19.715 "memory_domains": [ 00:09:19.715 { 00:09:19.715 "dma_device_id": "system", 00:09:19.715 "dma_device_type": 1 00:09:19.715 }, 00:09:19.715 { 00:09:19.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.715 "dma_device_type": 2 00:09:19.715 } 00:09:19.715 ], 00:09:19.715 "driver_specific": {} 00:09:19.715 } 00:09:19.715 ] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.715 
12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.715 "name": "Existed_Raid", 00:09:19.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.715 "strip_size_kb": 64, 00:09:19.715 "state": "configuring", 00:09:19.715 "raid_level": "concat", 00:09:19.715 "superblock": false, 00:09:19.715 "num_base_bdevs": 3, 00:09:19.715 "num_base_bdevs_discovered": 2, 00:09:19.715 "num_base_bdevs_operational": 3, 00:09:19.715 "base_bdevs_list": [ 00:09:19.715 { 00:09:19.715 "name": "BaseBdev1", 00:09:19.715 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:19.715 "is_configured": true, 00:09:19.715 
"data_offset": 0, 00:09:19.715 "data_size": 65536 00:09:19.715 }, 00:09:19.715 { 00:09:19.715 "name": null, 00:09:19.715 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:19.715 "is_configured": false, 00:09:19.715 "data_offset": 0, 00:09:19.715 "data_size": 65536 00:09:19.715 }, 00:09:19.715 { 00:09:19.715 "name": "BaseBdev3", 00:09:19.715 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:19.715 "is_configured": true, 00:09:19.715 "data_offset": 0, 00:09:19.715 "data_size": 65536 00:09:19.715 } 00:09:19.715 ] 00:09:19.715 }' 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.715 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.281 [2024-11-06 12:40:08.684892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.281 
12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.281 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.281 "name": "Existed_Raid", 00:09:20.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.281 "strip_size_kb": 64, 00:09:20.281 "state": "configuring", 
00:09:20.281 "raid_level": "concat", 00:09:20.281 "superblock": false, 00:09:20.281 "num_base_bdevs": 3, 00:09:20.281 "num_base_bdevs_discovered": 1, 00:09:20.281 "num_base_bdevs_operational": 3, 00:09:20.281 "base_bdevs_list": [ 00:09:20.281 { 00:09:20.281 "name": "BaseBdev1", 00:09:20.281 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:20.281 "is_configured": true, 00:09:20.281 "data_offset": 0, 00:09:20.281 "data_size": 65536 00:09:20.281 }, 00:09:20.281 { 00:09:20.281 "name": null, 00:09:20.281 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:20.281 "is_configured": false, 00:09:20.281 "data_offset": 0, 00:09:20.282 "data_size": 65536 00:09:20.282 }, 00:09:20.282 { 00:09:20.282 "name": null, 00:09:20.282 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:20.282 "is_configured": false, 00:09:20.282 "data_offset": 0, 00:09:20.282 "data_size": 65536 00:09:20.282 } 00:09:20.282 ] 00:09:20.282 }' 00:09:20.282 12:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.282 12:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:20.849 12:40:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 [2024-11-06 12:40:09.257104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.849 12:40:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.849 "name": "Existed_Raid", 00:09:20.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.849 "strip_size_kb": 64, 00:09:20.849 "state": "configuring", 00:09:20.849 "raid_level": "concat", 00:09:20.849 "superblock": false, 00:09:20.849 "num_base_bdevs": 3, 00:09:20.849 "num_base_bdevs_discovered": 2, 00:09:20.849 "num_base_bdevs_operational": 3, 00:09:20.849 "base_bdevs_list": [ 00:09:20.849 { 00:09:20.849 "name": "BaseBdev1", 00:09:20.849 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:20.849 "is_configured": true, 00:09:20.849 "data_offset": 0, 00:09:20.849 "data_size": 65536 00:09:20.849 }, 00:09:20.849 { 00:09:20.849 "name": null, 00:09:20.849 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:20.849 "is_configured": false, 00:09:20.849 "data_offset": 0, 00:09:20.849 "data_size": 65536 00:09:20.849 }, 00:09:20.849 { 00:09:20.849 "name": "BaseBdev3", 00:09:20.849 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:20.849 "is_configured": true, 00:09:20.849 "data_offset": 0, 00:09:20.849 "data_size": 65536 00:09:20.849 } 00:09:20.849 ] 00:09:20.849 }' 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.849 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.109 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.110 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.110 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:21.110 12:40:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.110 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.371 [2024-11-06 12:40:09.773261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.371 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.372 "name": "Existed_Raid", 00:09:21.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.372 "strip_size_kb": 64, 00:09:21.372 "state": "configuring", 00:09:21.372 "raid_level": "concat", 00:09:21.372 "superblock": false, 00:09:21.372 "num_base_bdevs": 3, 00:09:21.372 "num_base_bdevs_discovered": 1, 00:09:21.372 "num_base_bdevs_operational": 3, 00:09:21.372 "base_bdevs_list": [ 00:09:21.372 { 00:09:21.372 "name": null, 00:09:21.372 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:21.372 "is_configured": false, 00:09:21.372 "data_offset": 0, 00:09:21.372 "data_size": 65536 00:09:21.372 }, 00:09:21.372 { 00:09:21.372 "name": null, 00:09:21.372 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:21.372 "is_configured": false, 00:09:21.372 "data_offset": 0, 00:09:21.372 "data_size": 65536 00:09:21.372 }, 00:09:21.372 { 00:09:21.372 "name": "BaseBdev3", 00:09:21.372 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:21.372 "is_configured": true, 00:09:21.372 "data_offset": 0, 00:09:21.372 "data_size": 65536 00:09:21.372 } 00:09:21.372 ] 00:09:21.372 }' 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.372 12:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.938 
12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.938 [2024-11-06 12:40:10.377966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.938 
12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.938 "name": "Existed_Raid", 00:09:21.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.938 "strip_size_kb": 64, 00:09:21.938 "state": "configuring", 00:09:21.938 "raid_level": "concat", 00:09:21.938 "superblock": false, 00:09:21.938 "num_base_bdevs": 3, 00:09:21.938 "num_base_bdevs_discovered": 2, 00:09:21.938 "num_base_bdevs_operational": 3, 00:09:21.938 "base_bdevs_list": [ 00:09:21.938 { 00:09:21.938 "name": null, 00:09:21.938 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:21.938 "is_configured": false, 00:09:21.938 "data_offset": 0, 00:09:21.938 "data_size": 65536 00:09:21.938 }, 00:09:21.938 { 00:09:21.938 "name": "BaseBdev2", 00:09:21.938 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:21.938 "is_configured": true, 00:09:21.938 "data_offset": 0, 00:09:21.938 "data_size": 65536 00:09:21.938 }, 00:09:21.938 { 00:09:21.938 "name": "BaseBdev3", 00:09:21.938 
"uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:21.938 "is_configured": true, 00:09:21.938 "data_offset": 0, 00:09:21.938 "data_size": 65536 00:09:21.938 } 00:09:21.938 ] 00:09:21.938 }' 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.938 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.503 12:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.503 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3c732e42-b57c-4264-a350-5fc0f1903137 00:09:22.503 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.503 12:40:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.503 [2024-11-06 12:40:11.047532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:22.503 [2024-11-06 12:40:11.047611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:22.504 [2024-11-06 12:40:11.047628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:22.504 [2024-11-06 12:40:11.047969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:22.504 [2024-11-06 12:40:11.048166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:22.504 [2024-11-06 12:40:11.048182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:22.504 [2024-11-06 12:40:11.048553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.504 NewBaseBdev 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.504 
12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.504 [ 00:09:22.504 { 00:09:22.504 "name": "NewBaseBdev", 00:09:22.504 "aliases": [ 00:09:22.504 "3c732e42-b57c-4264-a350-5fc0f1903137" 00:09:22.504 ], 00:09:22.504 "product_name": "Malloc disk", 00:09:22.504 "block_size": 512, 00:09:22.504 "num_blocks": 65536, 00:09:22.504 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:22.504 "assigned_rate_limits": { 00:09:22.504 "rw_ios_per_sec": 0, 00:09:22.504 "rw_mbytes_per_sec": 0, 00:09:22.504 "r_mbytes_per_sec": 0, 00:09:22.504 "w_mbytes_per_sec": 0 00:09:22.504 }, 00:09:22.504 "claimed": true, 00:09:22.504 "claim_type": "exclusive_write", 00:09:22.504 "zoned": false, 00:09:22.504 "supported_io_types": { 00:09:22.504 "read": true, 00:09:22.504 "write": true, 00:09:22.504 "unmap": true, 00:09:22.504 "flush": true, 00:09:22.504 "reset": true, 00:09:22.504 "nvme_admin": false, 00:09:22.504 "nvme_io": false, 00:09:22.504 "nvme_io_md": false, 00:09:22.504 "write_zeroes": true, 00:09:22.504 "zcopy": true, 00:09:22.504 "get_zone_info": false, 00:09:22.504 "zone_management": false, 00:09:22.504 "zone_append": false, 00:09:22.504 "compare": false, 00:09:22.504 "compare_and_write": false, 00:09:22.504 "abort": true, 00:09:22.504 "seek_hole": false, 00:09:22.504 "seek_data": false, 00:09:22.504 "copy": true, 00:09:22.504 "nvme_iov_md": false 00:09:22.504 }, 00:09:22.504 "memory_domains": [ 00:09:22.504 { 00:09:22.504 "dma_device_id": "system", 00:09:22.504 "dma_device_type": 1 
00:09:22.504 }, 00:09:22.504 { 00:09:22.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.504 "dma_device_type": 2 00:09:22.504 } 00:09:22.504 ], 00:09:22.504 "driver_specific": {} 00:09:22.504 } 00:09:22.504 ] 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.504 "name": "Existed_Raid", 00:09:22.504 "uuid": "ca4feac1-be3d-4fc3-97be-6018f2a64e59", 00:09:22.504 "strip_size_kb": 64, 00:09:22.504 "state": "online", 00:09:22.504 "raid_level": "concat", 00:09:22.504 "superblock": false, 00:09:22.504 "num_base_bdevs": 3, 00:09:22.504 "num_base_bdevs_discovered": 3, 00:09:22.504 "num_base_bdevs_operational": 3, 00:09:22.504 "base_bdevs_list": [ 00:09:22.504 { 00:09:22.504 "name": "NewBaseBdev", 00:09:22.504 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:22.504 "is_configured": true, 00:09:22.504 "data_offset": 0, 00:09:22.504 "data_size": 65536 00:09:22.504 }, 00:09:22.504 { 00:09:22.504 "name": "BaseBdev2", 00:09:22.504 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:22.504 "is_configured": true, 00:09:22.504 "data_offset": 0, 00:09:22.504 "data_size": 65536 00:09:22.504 }, 00:09:22.504 { 00:09:22.504 "name": "BaseBdev3", 00:09:22.504 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:22.504 "is_configured": true, 00:09:22.504 "data_offset": 0, 00:09:22.504 "data_size": 65536 00:09:22.504 } 00:09:22.504 ] 00:09:22.504 }' 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.504 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.070 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.071 [2024-11-06 12:40:11.584134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.071 "name": "Existed_Raid", 00:09:23.071 "aliases": [ 00:09:23.071 "ca4feac1-be3d-4fc3-97be-6018f2a64e59" 00:09:23.071 ], 00:09:23.071 "product_name": "Raid Volume", 00:09:23.071 "block_size": 512, 00:09:23.071 "num_blocks": 196608, 00:09:23.071 "uuid": "ca4feac1-be3d-4fc3-97be-6018f2a64e59", 00:09:23.071 "assigned_rate_limits": { 00:09:23.071 "rw_ios_per_sec": 0, 00:09:23.071 "rw_mbytes_per_sec": 0, 00:09:23.071 "r_mbytes_per_sec": 0, 00:09:23.071 "w_mbytes_per_sec": 0 00:09:23.071 }, 00:09:23.071 "claimed": false, 00:09:23.071 "zoned": false, 00:09:23.071 "supported_io_types": { 00:09:23.071 "read": true, 00:09:23.071 "write": true, 00:09:23.071 "unmap": true, 00:09:23.071 "flush": true, 00:09:23.071 "reset": true, 00:09:23.071 "nvme_admin": false, 00:09:23.071 "nvme_io": false, 00:09:23.071 "nvme_io_md": false, 00:09:23.071 "write_zeroes": true, 00:09:23.071 "zcopy": false, 00:09:23.071 "get_zone_info": false, 00:09:23.071 "zone_management": false, 00:09:23.071 
"zone_append": false, 00:09:23.071 "compare": false, 00:09:23.071 "compare_and_write": false, 00:09:23.071 "abort": false, 00:09:23.071 "seek_hole": false, 00:09:23.071 "seek_data": false, 00:09:23.071 "copy": false, 00:09:23.071 "nvme_iov_md": false 00:09:23.071 }, 00:09:23.071 "memory_domains": [ 00:09:23.071 { 00:09:23.071 "dma_device_id": "system", 00:09:23.071 "dma_device_type": 1 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.071 "dma_device_type": 2 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "dma_device_id": "system", 00:09:23.071 "dma_device_type": 1 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.071 "dma_device_type": 2 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "dma_device_id": "system", 00:09:23.071 "dma_device_type": 1 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.071 "dma_device_type": 2 00:09:23.071 } 00:09:23.071 ], 00:09:23.071 "driver_specific": { 00:09:23.071 "raid": { 00:09:23.071 "uuid": "ca4feac1-be3d-4fc3-97be-6018f2a64e59", 00:09:23.071 "strip_size_kb": 64, 00:09:23.071 "state": "online", 00:09:23.071 "raid_level": "concat", 00:09:23.071 "superblock": false, 00:09:23.071 "num_base_bdevs": 3, 00:09:23.071 "num_base_bdevs_discovered": 3, 00:09:23.071 "num_base_bdevs_operational": 3, 00:09:23.071 "base_bdevs_list": [ 00:09:23.071 { 00:09:23.071 "name": "NewBaseBdev", 00:09:23.071 "uuid": "3c732e42-b57c-4264-a350-5fc0f1903137", 00:09:23.071 "is_configured": true, 00:09:23.071 "data_offset": 0, 00:09:23.071 "data_size": 65536 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "name": "BaseBdev2", 00:09:23.071 "uuid": "8e07e1a1-fa08-4ec9-b944-affff4ca9d0e", 00:09:23.071 "is_configured": true, 00:09:23.071 "data_offset": 0, 00:09:23.071 "data_size": 65536 00:09:23.071 }, 00:09:23.071 { 00:09:23.071 "name": "BaseBdev3", 00:09:23.071 "uuid": "a61c1d52-2820-4fae-a289-30a7dcff2f91", 00:09:23.071 "is_configured": 
true, 00:09:23.071 "data_offset": 0, 00:09:23.071 "data_size": 65536 00:09:23.071 } 00:09:23.071 ] 00:09:23.071 } 00:09:23.071 } 00:09:23.071 }' 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:23.071 BaseBdev2 00:09:23.071 BaseBdev3' 00:09:23.071 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.330 [2024-11-06 12:40:11.899830] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:23.330 [2024-11-06 12:40:11.900004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.330 [2024-11-06 12:40:11.900275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.330 [2024-11-06 12:40:11.900471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.330 [2024-11-06 12:40:11.900508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65638 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65638 ']' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65638 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65638 00:09:23.330 killing process with pid 65638 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65638' 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65638 00:09:23.330 [2024-11-06 12:40:11.942465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:23.330 12:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65638 00:09:23.589 [2024-11-06 12:40:12.236383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:24.964 00:09:24.964 real 0m11.683s 00:09:24.964 user 0m19.027s 00:09:24.964 sys 0m1.749s 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 ************************************ 00:09:24.964 END TEST raid_state_function_test 00:09:24.964 ************************************ 00:09:24.964 12:40:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:24.964 12:40:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:24.964 12:40:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.964 12:40:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 ************************************ 00:09:24.964 START TEST raid_state_function_test_sb 00:09:24.964 ************************************ 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
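The test run above (bdev_raid.sh@188) extracts the names of configured base bdevs from the raid bdev's `driver_specific` info with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`. A minimal Python sketch of the same selection, using the field names as they appear in the JSON dumped in this log (the sample values are illustrative, with one bdev deliberately unconfigured to show the filtering):

```python
import json

# JSON shape mirrors the bdev_get_bdevs output captured in the log above;
# names are taken from the "Existed_Raid" dump, is_configured values are illustrative.
raid_bdev_info = json.loads("""
{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": false}
      ]
    }
  }
}
""")

# Equivalent of the jq filter:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
configured = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(configured)
```

The shell test then loops over exactly these names and compares each base bdev's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple against the raid bdev's.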
00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66276 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66276' 00:09:24.964 Process raid pid: 66276 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66276 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66276 ']' 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:24.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:24.964 12:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.964 [2024-11-06 12:40:13.539531] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
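The test above creates the array with `rpc_cmd bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid`. A hedged sketch of the JSON-RPC payload such a call would send over `/var/tmp/spdk.sock`, assuming SPDK's standard JSON-RPC 2.0 framing; the parameter names here are inferred from the CLI flags, not copied from SPDK documentation:

```python
import json

# Hypothetical request body behind:
#   rpc_cmd bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Parameter names are assumptions mapped from the flags seen in the log.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_raid_create",
    "params": {
        "name": "Existed_Raid",                                 # -n
        "raid_level": "concat",                                 # -r
        "strip_size_kb": 64,                                    # -z
        "superblock": True,                                     # -s
        "base_bdevs": ["BaseBdev1", "BaseBdev2", "BaseBdev3"],  # -b
    },
}
payload = json.dumps(request)
print(payload)
```

Because `-s` is passed (`superblock_create_arg=-s` above), the base bdevs reserve superblock space, which is why `data_offset` later shows 2048 and `data_size` 63488 instead of 0/65536.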
00:09:24.964 [2024-11-06 12:40:13.539735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.223 [2024-11-06 12:40:13.732691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.482 [2024-11-06 12:40:13.930890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.740 [2024-11-06 12:40:14.207057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.740 [2024-11-06 12:40:14.207129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.999 [2024-11-06 12:40:14.562770] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.999 [2024-11-06 12:40:14.562853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.999 [2024-11-06 12:40:14.562872] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.999 [2024-11-06 12:40:14.562889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.999 [2024-11-06 12:40:14.562900] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:25.999 [2024-11-06 12:40:14.562915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.999 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.999 "name": "Existed_Raid", 00:09:25.999 "uuid": "aea28849-872d-46c8-abfd-46b588d3d7b3", 00:09:26.000 "strip_size_kb": 64, 00:09:26.000 "state": "configuring", 00:09:26.000 "raid_level": "concat", 00:09:26.000 "superblock": true, 00:09:26.000 "num_base_bdevs": 3, 00:09:26.000 "num_base_bdevs_discovered": 0, 00:09:26.000 "num_base_bdevs_operational": 3, 00:09:26.000 "base_bdevs_list": [ 00:09:26.000 { 00:09:26.000 "name": "BaseBdev1", 00:09:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.000 "is_configured": false, 00:09:26.000 "data_offset": 0, 00:09:26.000 "data_size": 0 00:09:26.000 }, 00:09:26.000 { 00:09:26.000 "name": "BaseBdev2", 00:09:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.000 "is_configured": false, 00:09:26.000 "data_offset": 0, 00:09:26.000 "data_size": 0 00:09:26.000 }, 00:09:26.000 { 00:09:26.000 "name": "BaseBdev3", 00:09:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.000 "is_configured": false, 00:09:26.000 "data_offset": 0, 00:09:26.000 "data_size": 0 00:09:26.000 } 00:09:26.000 ] 00:09:26.000 }' 00:09:26.000 12:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.000 12:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 [2024-11-06 12:40:15.058797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.567 [2024-11-06 12:40:15.058863] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 [2024-11-06 12:40:15.066758] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.567 [2024-11-06 12:40:15.066817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.567 [2024-11-06 12:40:15.066833] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.567 [2024-11-06 12:40:15.066849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.567 [2024-11-06 12:40:15.066859] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.567 [2024-11-06 12:40:15.066874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 [2024-11-06 12:40:15.114870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.567 BaseBdev1 
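`verify_raid_bdev_state Existed_Raid configuring concat 64 3`, invoked repeatedly above, selects the `Existed_Raid` entry from `bdev_raid_get_bdevs all` and compares its fields against the expected values. A small Python sketch of those comparisons, using the field values from the JSON dumped in this log (this mirrors the shell helper's checks; it is not the SPDK script itself):

```python
# Excerpt of the raid_bdev_info JSON captured above (values from the log).
info = {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": True,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
}

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Sketch of the shell helper's field comparisons."""
    return (
        info["state"] == expected_state
        and info["raid_level"] == raid_level
        and info["strip_size_kb"] == strip_size
        and info["num_base_bdevs_operational"] == operational
    )

ok = verify_raid_bdev_state(info, "configuring", "concat", 64, 3)
print(ok)
```

The state stays `configuring` while `num_base_bdevs_discovered` < `num_base_bdevs`; once all three base bdevs are claimed it flips to `online`, as seen earlier in the log.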
00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 [ 00:09:26.567 { 00:09:26.567 "name": "BaseBdev1", 00:09:26.567 "aliases": [ 00:09:26.567 "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f" 00:09:26.567 ], 00:09:26.567 "product_name": "Malloc disk", 00:09:26.567 "block_size": 512, 00:09:26.567 "num_blocks": 65536, 00:09:26.567 "uuid": "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f", 00:09:26.567 "assigned_rate_limits": { 00:09:26.567 
"rw_ios_per_sec": 0, 00:09:26.567 "rw_mbytes_per_sec": 0, 00:09:26.567 "r_mbytes_per_sec": 0, 00:09:26.567 "w_mbytes_per_sec": 0 00:09:26.567 }, 00:09:26.567 "claimed": true, 00:09:26.567 "claim_type": "exclusive_write", 00:09:26.567 "zoned": false, 00:09:26.567 "supported_io_types": { 00:09:26.567 "read": true, 00:09:26.567 "write": true, 00:09:26.567 "unmap": true, 00:09:26.567 "flush": true, 00:09:26.567 "reset": true, 00:09:26.567 "nvme_admin": false, 00:09:26.567 "nvme_io": false, 00:09:26.567 "nvme_io_md": false, 00:09:26.567 "write_zeroes": true, 00:09:26.567 "zcopy": true, 00:09:26.567 "get_zone_info": false, 00:09:26.567 "zone_management": false, 00:09:26.567 "zone_append": false, 00:09:26.567 "compare": false, 00:09:26.567 "compare_and_write": false, 00:09:26.567 "abort": true, 00:09:26.567 "seek_hole": false, 00:09:26.567 "seek_data": false, 00:09:26.567 "copy": true, 00:09:26.567 "nvme_iov_md": false 00:09:26.567 }, 00:09:26.567 "memory_domains": [ 00:09:26.567 { 00:09:26.567 "dma_device_id": "system", 00:09:26.567 "dma_device_type": 1 00:09:26.567 }, 00:09:26.567 { 00:09:26.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.567 "dma_device_type": 2 00:09:26.567 } 00:09:26.567 ], 00:09:26.567 "driver_specific": {} 00:09:26.567 } 00:09:26.567 ] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.567 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.567 "name": "Existed_Raid", 00:09:26.567 "uuid": "2102b701-422d-4e7c-9460-3d3a74f69d92", 00:09:26.567 "strip_size_kb": 64, 00:09:26.567 "state": "configuring", 00:09:26.567 "raid_level": "concat", 00:09:26.567 "superblock": true, 00:09:26.567 "num_base_bdevs": 3, 00:09:26.567 "num_base_bdevs_discovered": 1, 00:09:26.567 "num_base_bdevs_operational": 3, 00:09:26.567 "base_bdevs_list": [ 00:09:26.567 { 00:09:26.567 "name": "BaseBdev1", 00:09:26.567 "uuid": "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f", 00:09:26.567 "is_configured": true, 00:09:26.568 "data_offset": 2048, 00:09:26.568 "data_size": 
63488 00:09:26.568 }, 00:09:26.568 { 00:09:26.568 "name": "BaseBdev2", 00:09:26.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.568 "is_configured": false, 00:09:26.568 "data_offset": 0, 00:09:26.568 "data_size": 0 00:09:26.568 }, 00:09:26.568 { 00:09:26.568 "name": "BaseBdev3", 00:09:26.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.568 "is_configured": false, 00:09:26.568 "data_offset": 0, 00:09:26.568 "data_size": 0 00:09:26.568 } 00:09:26.568 ] 00:09:26.568 }' 00:09:26.568 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.568 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.138 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.138 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.138 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.138 [2024-11-06 12:40:15.655092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.138 [2024-11-06 12:40:15.655186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:27.138 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.139 [2024-11-06 12:40:15.663152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.139 [2024-11-06 
12:40:15.665776] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.139 [2024-11-06 12:40:15.665836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.139 [2024-11-06 12:40:15.665853] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.139 [2024-11-06 12:40:15.665870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.139 "name": "Existed_Raid", 00:09:27.139 "uuid": "a535f941-0f58-45e2-8ac3-8605e1b127b1", 00:09:27.139 "strip_size_kb": 64, 00:09:27.139 "state": "configuring", 00:09:27.139 "raid_level": "concat", 00:09:27.139 "superblock": true, 00:09:27.139 "num_base_bdevs": 3, 00:09:27.139 "num_base_bdevs_discovered": 1, 00:09:27.139 "num_base_bdevs_operational": 3, 00:09:27.139 "base_bdevs_list": [ 00:09:27.139 { 00:09:27.139 "name": "BaseBdev1", 00:09:27.139 "uuid": "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f", 00:09:27.139 "is_configured": true, 00:09:27.139 "data_offset": 2048, 00:09:27.139 "data_size": 63488 00:09:27.139 }, 00:09:27.139 { 00:09:27.139 "name": "BaseBdev2", 00:09:27.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.139 "is_configured": false, 00:09:27.139 "data_offset": 0, 00:09:27.139 "data_size": 0 00:09:27.139 }, 00:09:27.139 { 00:09:27.139 "name": "BaseBdev3", 00:09:27.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.139 "is_configured": false, 00:09:27.139 "data_offset": 0, 00:09:27.139 "data_size": 0 00:09:27.139 } 00:09:27.139 ] 00:09:27.139 }' 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.139 12:40:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 [2024-11-06 12:40:16.212445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.706 BaseBdev2 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.706 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.706 [ 00:09:27.706 { 00:09:27.706 "name": "BaseBdev2", 00:09:27.706 "aliases": [ 00:09:27.706 "24d182ac-8e27-4444-b7e2-8a2dd3e6d856" 00:09:27.706 ], 00:09:27.706 "product_name": "Malloc disk", 00:09:27.706 "block_size": 512, 00:09:27.706 "num_blocks": 65536, 00:09:27.707 "uuid": "24d182ac-8e27-4444-b7e2-8a2dd3e6d856", 00:09:27.707 "assigned_rate_limits": { 00:09:27.707 "rw_ios_per_sec": 0, 00:09:27.707 "rw_mbytes_per_sec": 0, 00:09:27.707 "r_mbytes_per_sec": 0, 00:09:27.707 "w_mbytes_per_sec": 0 00:09:27.707 }, 00:09:27.707 "claimed": true, 00:09:27.707 "claim_type": "exclusive_write", 00:09:27.707 "zoned": false, 00:09:27.707 "supported_io_types": { 00:09:27.707 "read": true, 00:09:27.707 "write": true, 00:09:27.707 "unmap": true, 00:09:27.707 "flush": true, 00:09:27.707 "reset": true, 00:09:27.707 "nvme_admin": false, 00:09:27.707 "nvme_io": false, 00:09:27.707 "nvme_io_md": false, 00:09:27.707 "write_zeroes": true, 00:09:27.707 "zcopy": true, 00:09:27.707 "get_zone_info": false, 00:09:27.707 "zone_management": false, 00:09:27.707 "zone_append": false, 00:09:27.707 "compare": false, 00:09:27.707 "compare_and_write": false, 00:09:27.707 "abort": true, 00:09:27.707 "seek_hole": false, 00:09:27.707 "seek_data": false, 00:09:27.707 "copy": true, 00:09:27.707 "nvme_iov_md": false 00:09:27.707 }, 00:09:27.707 "memory_domains": [ 00:09:27.707 { 00:09:27.707 "dma_device_id": "system", 00:09:27.707 "dma_device_type": 1 00:09:27.707 }, 00:09:27.707 { 00:09:27.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.707 "dma_device_type": 2 00:09:27.707 } 00:09:27.707 ], 00:09:27.707 "driver_specific": {} 00:09:27.707 } 00:09:27.707 ] 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.707 "name": "Existed_Raid", 00:09:27.707 "uuid": "a535f941-0f58-45e2-8ac3-8605e1b127b1", 00:09:27.707 "strip_size_kb": 64, 00:09:27.707 "state": "configuring", 00:09:27.707 "raid_level": "concat", 00:09:27.707 "superblock": true, 00:09:27.707 "num_base_bdevs": 3, 00:09:27.707 "num_base_bdevs_discovered": 2, 00:09:27.707 "num_base_bdevs_operational": 3, 00:09:27.707 "base_bdevs_list": [ 00:09:27.707 { 00:09:27.707 "name": "BaseBdev1", 00:09:27.707 "uuid": "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f", 00:09:27.707 "is_configured": true, 00:09:27.707 "data_offset": 2048, 00:09:27.707 "data_size": 63488 00:09:27.707 }, 00:09:27.707 { 00:09:27.707 "name": "BaseBdev2", 00:09:27.707 "uuid": "24d182ac-8e27-4444-b7e2-8a2dd3e6d856", 00:09:27.707 "is_configured": true, 00:09:27.707 "data_offset": 2048, 00:09:27.707 "data_size": 63488 00:09:27.707 }, 00:09:27.707 { 00:09:27.707 "name": "BaseBdev3", 00:09:27.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.707 "is_configured": false, 00:09:27.707 "data_offset": 0, 00:09:27.707 "data_size": 0 00:09:27.707 } 00:09:27.707 ] 00:09:27.707 }' 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.707 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 [2024-11-06 12:40:16.805960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.274 [2024-11-06 12:40:16.806343] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.274 [2024-11-06 12:40:16.806377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:28.274 BaseBdev3 00:09:28.274 [2024-11-06 12:40:16.806727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:28.274 [2024-11-06 12:40:16.806946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.274 [2024-11-06 12:40:16.806964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:28.274 [2024-11-06 12:40:16.807151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 [ 00:09:28.274 { 00:09:28.274 "name": "BaseBdev3", 00:09:28.274 "aliases": [ 00:09:28.274 "b3857821-697e-4139-b3b9-443f74b5d52b" 00:09:28.274 ], 00:09:28.274 "product_name": "Malloc disk", 00:09:28.274 "block_size": 512, 00:09:28.274 "num_blocks": 65536, 00:09:28.274 "uuid": "b3857821-697e-4139-b3b9-443f74b5d52b", 00:09:28.274 "assigned_rate_limits": { 00:09:28.274 "rw_ios_per_sec": 0, 00:09:28.274 "rw_mbytes_per_sec": 0, 00:09:28.274 "r_mbytes_per_sec": 0, 00:09:28.274 "w_mbytes_per_sec": 0 00:09:28.274 }, 00:09:28.274 "claimed": true, 00:09:28.274 "claim_type": "exclusive_write", 00:09:28.274 "zoned": false, 00:09:28.274 "supported_io_types": { 00:09:28.274 "read": true, 00:09:28.274 "write": true, 00:09:28.274 "unmap": true, 00:09:28.274 "flush": true, 00:09:28.274 "reset": true, 00:09:28.274 "nvme_admin": false, 00:09:28.274 "nvme_io": false, 00:09:28.274 "nvme_io_md": false, 00:09:28.274 "write_zeroes": true, 00:09:28.274 "zcopy": true, 00:09:28.274 "get_zone_info": false, 00:09:28.274 "zone_management": false, 00:09:28.274 "zone_append": false, 00:09:28.274 "compare": false, 00:09:28.274 "compare_and_write": false, 00:09:28.274 "abort": true, 00:09:28.274 "seek_hole": false, 00:09:28.274 "seek_data": false, 00:09:28.274 "copy": true, 00:09:28.274 "nvme_iov_md": false 00:09:28.274 }, 00:09:28.274 "memory_domains": [ 00:09:28.274 { 00:09:28.274 "dma_device_id": "system", 00:09:28.274 "dma_device_type": 1 00:09:28.274 }, 00:09:28.274 { 00:09:28.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.274 "dma_device_type": 2 00:09:28.274 } 00:09:28.274 ], 00:09:28.274 "driver_specific": 
{} 00:09:28.274 } 00:09:28.274 ] 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.274 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.274 
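The trace above shows `verify_raid_bdev_state Existed_Raid online concat 64 3` pulling the raid JSON via `rpc_cmd bdev_raid_get_bdevs all` and filtering it with jq before comparing fields. The following is a hypothetical standalone sketch of that state-check pattern; the RPC output is stubbed inline here (in the real test it comes from the running SPDK target), and the field extraction uses `sed` instead of jq purely so the sketch has no external dependencies.

```shell
# Stub of the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns for one
# raid bdev (heavily trimmed; field names match the dumps in this log).
raid_bdev_info='{"name":"Existed_Raid","state":"online","raid_level":"concat","strip_size_kb":64,"num_base_bdevs":3,"num_base_bdevs_discovered":3}'

# Extract the "state" field and compare it against the expected value,
# mirroring what verify_raid_bdev_state does with jq in the trace above.
expected_state=online
state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state":"\([a-z]*\)".*/\1/p')
if [ "$state" = "$expected_state" ]; then
  echo "state OK: $state"
else
  echo "state mismatch: $state != $expected_state" >&2
  exit 1
fi
```

The real helper also checks `raid_level`, `strip_size_kb`, and the discovered/operational base-bdev counts the same way, failing the test on the first mismatch.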
12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.275 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.275 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.275 "name": "Existed_Raid", 00:09:28.275 "uuid": "a535f941-0f58-45e2-8ac3-8605e1b127b1", 00:09:28.275 "strip_size_kb": 64, 00:09:28.275 "state": "online", 00:09:28.275 "raid_level": "concat", 00:09:28.275 "superblock": true, 00:09:28.275 "num_base_bdevs": 3, 00:09:28.275 "num_base_bdevs_discovered": 3, 00:09:28.275 "num_base_bdevs_operational": 3, 00:09:28.275 "base_bdevs_list": [ 00:09:28.275 { 00:09:28.275 "name": "BaseBdev1", 00:09:28.275 "uuid": "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f", 00:09:28.275 "is_configured": true, 00:09:28.275 "data_offset": 2048, 00:09:28.275 "data_size": 63488 00:09:28.275 }, 00:09:28.275 { 00:09:28.275 "name": "BaseBdev2", 00:09:28.275 "uuid": "24d182ac-8e27-4444-b7e2-8a2dd3e6d856", 00:09:28.275 "is_configured": true, 00:09:28.275 "data_offset": 2048, 00:09:28.275 "data_size": 63488 00:09:28.275 }, 00:09:28.275 { 00:09:28.275 "name": "BaseBdev3", 00:09:28.275 "uuid": "b3857821-697e-4139-b3b9-443f74b5d52b", 00:09:28.275 "is_configured": true, 00:09:28.275 "data_offset": 2048, 00:09:28.275 "data_size": 63488 00:09:28.275 } 00:09:28.275 ] 00:09:28.275 }' 00:09:28.275 12:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.275 12:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.841 [2024-11-06 12:40:17.346585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.841 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.841 "name": "Existed_Raid", 00:09:28.841 "aliases": [ 00:09:28.841 "a535f941-0f58-45e2-8ac3-8605e1b127b1" 00:09:28.841 ], 00:09:28.841 "product_name": "Raid Volume", 00:09:28.841 "block_size": 512, 00:09:28.841 "num_blocks": 190464, 00:09:28.841 "uuid": "a535f941-0f58-45e2-8ac3-8605e1b127b1", 00:09:28.841 "assigned_rate_limits": { 00:09:28.841 "rw_ios_per_sec": 0, 00:09:28.841 "rw_mbytes_per_sec": 0, 00:09:28.841 "r_mbytes_per_sec": 0, 00:09:28.841 "w_mbytes_per_sec": 0 00:09:28.841 }, 00:09:28.841 "claimed": false, 00:09:28.841 "zoned": false, 00:09:28.841 "supported_io_types": { 00:09:28.841 "read": true, 00:09:28.841 "write": true, 00:09:28.841 "unmap": true, 00:09:28.841 "flush": true, 00:09:28.841 "reset": true, 00:09:28.841 "nvme_admin": false, 00:09:28.841 "nvme_io": false, 00:09:28.841 "nvme_io_md": false, 00:09:28.841 
"write_zeroes": true, 00:09:28.841 "zcopy": false, 00:09:28.841 "get_zone_info": false, 00:09:28.841 "zone_management": false, 00:09:28.841 "zone_append": false, 00:09:28.841 "compare": false, 00:09:28.841 "compare_and_write": false, 00:09:28.841 "abort": false, 00:09:28.841 "seek_hole": false, 00:09:28.841 "seek_data": false, 00:09:28.841 "copy": false, 00:09:28.841 "nvme_iov_md": false 00:09:28.841 }, 00:09:28.841 "memory_domains": [ 00:09:28.841 { 00:09:28.841 "dma_device_id": "system", 00:09:28.841 "dma_device_type": 1 00:09:28.841 }, 00:09:28.841 { 00:09:28.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.841 "dma_device_type": 2 00:09:28.841 }, 00:09:28.841 { 00:09:28.841 "dma_device_id": "system", 00:09:28.841 "dma_device_type": 1 00:09:28.841 }, 00:09:28.842 { 00:09:28.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.842 "dma_device_type": 2 00:09:28.842 }, 00:09:28.842 { 00:09:28.842 "dma_device_id": "system", 00:09:28.842 "dma_device_type": 1 00:09:28.842 }, 00:09:28.842 { 00:09:28.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.842 "dma_device_type": 2 00:09:28.842 } 00:09:28.842 ], 00:09:28.842 "driver_specific": { 00:09:28.842 "raid": { 00:09:28.842 "uuid": "a535f941-0f58-45e2-8ac3-8605e1b127b1", 00:09:28.842 "strip_size_kb": 64, 00:09:28.842 "state": "online", 00:09:28.842 "raid_level": "concat", 00:09:28.842 "superblock": true, 00:09:28.842 "num_base_bdevs": 3, 00:09:28.842 "num_base_bdevs_discovered": 3, 00:09:28.842 "num_base_bdevs_operational": 3, 00:09:28.842 "base_bdevs_list": [ 00:09:28.842 { 00:09:28.842 "name": "BaseBdev1", 00:09:28.842 "uuid": "4fba44a9-2eed-4bc7-a395-4b8551fa8a4f", 00:09:28.842 "is_configured": true, 00:09:28.842 "data_offset": 2048, 00:09:28.842 "data_size": 63488 00:09:28.842 }, 00:09:28.842 { 00:09:28.842 "name": "BaseBdev2", 00:09:28.842 "uuid": "24d182ac-8e27-4444-b7e2-8a2dd3e6d856", 00:09:28.842 "is_configured": true, 00:09:28.842 "data_offset": 2048, 00:09:28.842 "data_size": 63488 00:09:28.842 }, 
00:09:28.842 { 00:09:28.842 "name": "BaseBdev3", 00:09:28.842 "uuid": "b3857821-697e-4139-b3b9-443f74b5d52b", 00:09:28.842 "is_configured": true, 00:09:28.842 "data_offset": 2048, 00:09:28.842 "data_size": 63488 00:09:28.842 } 00:09:28.842 ] 00:09:28.842 } 00:09:28.842 } 00:09:28.842 }' 00:09:28.842 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.842 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:28.842 BaseBdev2 00:09:28.842 BaseBdev3' 00:09:28.842 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.842 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.842 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.101 
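The `cmp_raid_bdev`/`cmp_base_bdev` checks traced here join `[.block_size, .md_size, .md_interleave, .dif_type]` into one string for the raid volume and for each base bdev, then compare the two strings. A hypothetical standalone sketch of that comparison follows; the `rpc_cmd ... | jq` pipeline is replaced by a stub so the sketch runs without an SPDK target, and the stubbed value assumes no metadata is configured, so the fields after `block_size` render empty and the joined string is `512` plus trailing spaces (which is why the trace shows the escaped pattern `\5\1\2\ \ \ `).

```shell
# Stub for: rpc_cmd bdev_get_bdevs -b "$1" \
#   | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# With md_size/md_interleave/dif_type unset, join(" ") yields "512   ".
join_bdev_props() {
  printf '512   \n'
}

# Compare the raid volume's properties against one base bdev's, as the
# verify_raid_bdev_properties loop above does for BaseBdev1..BaseBdev3.
cmp_raid_bdev=$(join_bdev_props Existed_Raid)
cmp_base_bdev=$(join_bdev_props BaseBdev1)
if [ "$cmp_raid_bdev" = "$cmp_base_bdev" ]; then
  echo "BaseBdev1 properties match raid volume"
fi
```

Note that command substitution strips trailing newlines but not trailing spaces, so the empty metadata fields survive into the compared strings; quoting both sides of the `=` keeps those spaces significant.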
12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.101 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.102 [2024-11-06 12:40:17.658325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.102 [2024-11-06 12:40:17.658370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.102 [2024-11-06 12:40:17.658450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.102 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.360 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.360 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.360 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.360 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.360 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.360 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.360 "name": "Existed_Raid", 00:09:29.361 "uuid": "a535f941-0f58-45e2-8ac3-8605e1b127b1", 00:09:29.361 "strip_size_kb": 64, 00:09:29.361 "state": "offline", 00:09:29.361 "raid_level": "concat", 00:09:29.361 "superblock": true, 00:09:29.361 "num_base_bdevs": 3, 00:09:29.361 "num_base_bdevs_discovered": 2, 00:09:29.361 "num_base_bdevs_operational": 2, 00:09:29.361 "base_bdevs_list": [ 00:09:29.361 { 00:09:29.361 "name": null, 00:09:29.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.361 "is_configured": false, 00:09:29.361 "data_offset": 0, 00:09:29.361 "data_size": 63488 00:09:29.361 }, 00:09:29.361 { 00:09:29.361 "name": "BaseBdev2", 00:09:29.361 "uuid": "24d182ac-8e27-4444-b7e2-8a2dd3e6d856", 00:09:29.361 "is_configured": true, 00:09:29.361 "data_offset": 2048, 00:09:29.361 "data_size": 63488 00:09:29.361 }, 00:09:29.361 { 00:09:29.361 "name": "BaseBdev3", 00:09:29.361 "uuid": "b3857821-697e-4139-b3b9-443f74b5d52b", 
00:09:29.361 "is_configured": true, 00:09:29.361 "data_offset": 2048, 00:09:29.361 "data_size": 63488 00:09:29.361 } 00:09:29.361 ] 00:09:29.361 }' 00:09:29.361 12:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.361 12:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.620 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:29.620 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.879 [2024-11-06 12:40:18.367070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.879 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.879 [2024-11-06 12:40:18.515565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.879 [2024-11-06 12:40:18.515660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:30.138 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 BaseBdev2 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:30.139 12:40:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 [ 00:09:30.139 { 00:09:30.139 "name": "BaseBdev2", 00:09:30.139 "aliases": [ 00:09:30.139 "c6e779d1-f4d9-4206-8137-4867ffc11b78" 00:09:30.139 ], 00:09:30.139 "product_name": "Malloc disk", 00:09:30.139 "block_size": 512, 00:09:30.139 "num_blocks": 65536, 00:09:30.139 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:30.139 "assigned_rate_limits": { 00:09:30.139 "rw_ios_per_sec": 0, 00:09:30.139 "rw_mbytes_per_sec": 0, 00:09:30.139 "r_mbytes_per_sec": 0, 00:09:30.139 "w_mbytes_per_sec": 0 00:09:30.139 }, 00:09:30.139 "claimed": false, 00:09:30.139 "zoned": false, 00:09:30.139 "supported_io_types": { 00:09:30.139 "read": true, 00:09:30.139 "write": true, 00:09:30.139 "unmap": true, 00:09:30.139 "flush": true, 00:09:30.139 "reset": true, 00:09:30.139 "nvme_admin": false, 00:09:30.139 "nvme_io": false, 00:09:30.139 "nvme_io_md": false, 00:09:30.139 "write_zeroes": true, 00:09:30.139 "zcopy": true, 00:09:30.139 "get_zone_info": false, 00:09:30.139 
"zone_management": false, 00:09:30.139 "zone_append": false, 00:09:30.139 "compare": false, 00:09:30.139 "compare_and_write": false, 00:09:30.139 "abort": true, 00:09:30.139 "seek_hole": false, 00:09:30.139 "seek_data": false, 00:09:30.139 "copy": true, 00:09:30.139 "nvme_iov_md": false 00:09:30.139 }, 00:09:30.139 "memory_domains": [ 00:09:30.139 { 00:09:30.139 "dma_device_id": "system", 00:09:30.139 "dma_device_type": 1 00:09:30.139 }, 00:09:30.139 { 00:09:30.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.139 "dma_device_type": 2 00:09:30.139 } 00:09:30.139 ], 00:09:30.139 "driver_specific": {} 00:09:30.139 } 00:09:30.139 ] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 BaseBdev3 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.139 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 [ 00:09:30.139 { 00:09:30.139 "name": "BaseBdev3", 00:09:30.139 "aliases": [ 00:09:30.139 "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57" 00:09:30.139 ], 00:09:30.139 "product_name": "Malloc disk", 00:09:30.139 "block_size": 512, 00:09:30.139 "num_blocks": 65536, 00:09:30.139 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:30.139 "assigned_rate_limits": { 00:09:30.139 "rw_ios_per_sec": 0, 00:09:30.139 "rw_mbytes_per_sec": 0, 00:09:30.139 "r_mbytes_per_sec": 0, 00:09:30.139 "w_mbytes_per_sec": 0 00:09:30.139 }, 00:09:30.139 "claimed": false, 00:09:30.139 "zoned": false, 00:09:30.139 "supported_io_types": { 00:09:30.139 "read": true, 00:09:30.139 "write": true, 00:09:30.139 "unmap": true, 00:09:30.139 "flush": true, 00:09:30.139 "reset": true, 00:09:30.139 "nvme_admin": false, 00:09:30.139 "nvme_io": false, 00:09:30.139 "nvme_io_md": false, 00:09:30.139 "write_zeroes": true, 00:09:30.139 
"zcopy": true, 00:09:30.139 "get_zone_info": false, 00:09:30.139 "zone_management": false, 00:09:30.139 "zone_append": false, 00:09:30.139 "compare": false, 00:09:30.139 "compare_and_write": false, 00:09:30.139 "abort": true, 00:09:30.399 "seek_hole": false, 00:09:30.399 "seek_data": false, 00:09:30.399 "copy": true, 00:09:30.399 "nvme_iov_md": false 00:09:30.399 }, 00:09:30.399 "memory_domains": [ 00:09:30.399 { 00:09:30.399 "dma_device_id": "system", 00:09:30.399 "dma_device_type": 1 00:09:30.399 }, 00:09:30.399 { 00:09:30.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.399 "dma_device_type": 2 00:09:30.399 } 00:09:30.399 ], 00:09:30.399 "driver_specific": {} 00:09:30.399 } 00:09:30.399 ] 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.399 [2024-11-06 12:40:18.800733] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.399 [2024-11-06 12:40:18.800808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.399 [2024-11-06 12:40:18.800842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.399 [2024-11-06 12:40:18.803418] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.399 12:40:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.399 "name": "Existed_Raid", 00:09:30.399 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:30.399 "strip_size_kb": 64, 00:09:30.399 "state": "configuring", 00:09:30.399 "raid_level": "concat", 00:09:30.399 "superblock": true, 00:09:30.399 "num_base_bdevs": 3, 00:09:30.399 "num_base_bdevs_discovered": 2, 00:09:30.399 "num_base_bdevs_operational": 3, 00:09:30.399 "base_bdevs_list": [ 00:09:30.399 { 00:09:30.399 "name": "BaseBdev1", 00:09:30.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.399 "is_configured": false, 00:09:30.399 "data_offset": 0, 00:09:30.399 "data_size": 0 00:09:30.399 }, 00:09:30.399 { 00:09:30.399 "name": "BaseBdev2", 00:09:30.399 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:30.399 "is_configured": true, 00:09:30.399 "data_offset": 2048, 00:09:30.399 "data_size": 63488 00:09:30.399 }, 00:09:30.399 { 00:09:30.399 "name": "BaseBdev3", 00:09:30.399 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:30.399 "is_configured": true, 00:09:30.399 "data_offset": 2048, 00:09:30.399 "data_size": 63488 00:09:30.399 } 00:09:30.399 ] 00:09:30.399 }' 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.399 12:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.967 [2024-11-06 12:40:19.336926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.967 12:40:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.967 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.967 "name": "Existed_Raid", 00:09:30.967 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:30.967 "strip_size_kb": 64, 
00:09:30.967 "state": "configuring", 00:09:30.967 "raid_level": "concat", 00:09:30.967 "superblock": true, 00:09:30.967 "num_base_bdevs": 3, 00:09:30.967 "num_base_bdevs_discovered": 1, 00:09:30.967 "num_base_bdevs_operational": 3, 00:09:30.967 "base_bdevs_list": [ 00:09:30.967 { 00:09:30.967 "name": "BaseBdev1", 00:09:30.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.967 "is_configured": false, 00:09:30.967 "data_offset": 0, 00:09:30.967 "data_size": 0 00:09:30.967 }, 00:09:30.967 { 00:09:30.967 "name": null, 00:09:30.968 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:30.968 "is_configured": false, 00:09:30.968 "data_offset": 0, 00:09:30.968 "data_size": 63488 00:09:30.968 }, 00:09:30.968 { 00:09:30.968 "name": "BaseBdev3", 00:09:30.968 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:30.968 "is_configured": true, 00:09:30.968 "data_offset": 2048, 00:09:30.968 "data_size": 63488 00:09:30.968 } 00:09:30.968 ] 00:09:30.968 }' 00:09:30.968 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.968 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.230 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.494 [2024-11-06 12:40:19.921997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.494 BaseBdev1 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.494 
[ 00:09:31.494 { 00:09:31.494 "name": "BaseBdev1", 00:09:31.494 "aliases": [ 00:09:31.494 "50b58796-7b18-4ed3-81ed-b2332dbaa48d" 00:09:31.494 ], 00:09:31.494 "product_name": "Malloc disk", 00:09:31.494 "block_size": 512, 00:09:31.494 "num_blocks": 65536, 00:09:31.494 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:31.494 "assigned_rate_limits": { 00:09:31.494 "rw_ios_per_sec": 0, 00:09:31.494 "rw_mbytes_per_sec": 0, 00:09:31.494 "r_mbytes_per_sec": 0, 00:09:31.494 "w_mbytes_per_sec": 0 00:09:31.494 }, 00:09:31.494 "claimed": true, 00:09:31.494 "claim_type": "exclusive_write", 00:09:31.494 "zoned": false, 00:09:31.494 "supported_io_types": { 00:09:31.494 "read": true, 00:09:31.494 "write": true, 00:09:31.494 "unmap": true, 00:09:31.494 "flush": true, 00:09:31.494 "reset": true, 00:09:31.494 "nvme_admin": false, 00:09:31.494 "nvme_io": false, 00:09:31.494 "nvme_io_md": false, 00:09:31.494 "write_zeroes": true, 00:09:31.494 "zcopy": true, 00:09:31.494 "get_zone_info": false, 00:09:31.494 "zone_management": false, 00:09:31.494 "zone_append": false, 00:09:31.494 "compare": false, 00:09:31.494 "compare_and_write": false, 00:09:31.494 "abort": true, 00:09:31.494 "seek_hole": false, 00:09:31.494 "seek_data": false, 00:09:31.494 "copy": true, 00:09:31.494 "nvme_iov_md": false 00:09:31.494 }, 00:09:31.494 "memory_domains": [ 00:09:31.494 { 00:09:31.494 "dma_device_id": "system", 00:09:31.494 "dma_device_type": 1 00:09:31.494 }, 00:09:31.494 { 00:09:31.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.494 "dma_device_type": 2 00:09:31.494 } 00:09:31.494 ], 00:09:31.494 "driver_specific": {} 00:09:31.494 } 00:09:31.494 ] 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.494 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.494 "name": "Existed_Raid", 00:09:31.494 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:31.494 "strip_size_kb": 64, 00:09:31.494 "state": "configuring", 00:09:31.494 "raid_level": "concat", 00:09:31.494 "superblock": true, 
00:09:31.494 "num_base_bdevs": 3, 00:09:31.494 "num_base_bdevs_discovered": 2, 00:09:31.494 "num_base_bdevs_operational": 3, 00:09:31.494 "base_bdevs_list": [ 00:09:31.494 { 00:09:31.494 "name": "BaseBdev1", 00:09:31.494 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:31.494 "is_configured": true, 00:09:31.494 "data_offset": 2048, 00:09:31.494 "data_size": 63488 00:09:31.495 }, 00:09:31.495 { 00:09:31.495 "name": null, 00:09:31.495 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:31.495 "is_configured": false, 00:09:31.495 "data_offset": 0, 00:09:31.495 "data_size": 63488 00:09:31.495 }, 00:09:31.495 { 00:09:31.495 "name": "BaseBdev3", 00:09:31.495 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:31.495 "is_configured": true, 00:09:31.495 "data_offset": 2048, 00:09:31.495 "data_size": 63488 00:09:31.495 } 00:09:31.495 ] 00:09:31.495 }' 00:09:31.495 12:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.495 12:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.062 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.062 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.062 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.062 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.062 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.063 [2024-11-06 12:40:20.474285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.063 "name": "Existed_Raid", 00:09:32.063 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:32.063 "strip_size_kb": 64, 00:09:32.063 "state": "configuring", 00:09:32.063 "raid_level": "concat", 00:09:32.063 "superblock": true, 00:09:32.063 "num_base_bdevs": 3, 00:09:32.063 "num_base_bdevs_discovered": 1, 00:09:32.063 "num_base_bdevs_operational": 3, 00:09:32.063 "base_bdevs_list": [ 00:09:32.063 { 00:09:32.063 "name": "BaseBdev1", 00:09:32.063 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:32.063 "is_configured": true, 00:09:32.063 "data_offset": 2048, 00:09:32.063 "data_size": 63488 00:09:32.063 }, 00:09:32.063 { 00:09:32.063 "name": null, 00:09:32.063 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:32.063 "is_configured": false, 00:09:32.063 "data_offset": 0, 00:09:32.063 "data_size": 63488 00:09:32.063 }, 00:09:32.063 { 00:09:32.063 "name": null, 00:09:32.063 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:32.063 "is_configured": false, 00:09:32.063 "data_offset": 0, 00:09:32.063 "data_size": 63488 00:09:32.063 } 00:09:32.063 ] 00:09:32.063 }' 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.063 12:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.629 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.629 [2024-11-06 12:40:21.070472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.630 "name": "Existed_Raid", 00:09:32.630 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:32.630 "strip_size_kb": 64, 00:09:32.630 "state": "configuring", 00:09:32.630 "raid_level": "concat", 00:09:32.630 "superblock": true, 00:09:32.630 "num_base_bdevs": 3, 00:09:32.630 "num_base_bdevs_discovered": 2, 00:09:32.630 "num_base_bdevs_operational": 3, 00:09:32.630 "base_bdevs_list": [ 00:09:32.630 { 00:09:32.630 "name": "BaseBdev1", 00:09:32.630 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:32.630 "is_configured": true, 00:09:32.630 "data_offset": 2048, 00:09:32.630 "data_size": 63488 00:09:32.630 }, 00:09:32.630 { 00:09:32.630 "name": null, 00:09:32.630 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:32.630 "is_configured": false, 00:09:32.630 "data_offset": 0, 00:09:32.630 "data_size": 63488 00:09:32.630 }, 00:09:32.630 { 00:09:32.630 "name": "BaseBdev3", 00:09:32.630 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:32.630 "is_configured": true, 00:09:32.630 "data_offset": 2048, 00:09:32.630 "data_size": 63488 00:09:32.630 } 00:09:32.630 ] 00:09:32.630 }' 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.630 12:40:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.196 [2024-11-06 12:40:21.614678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.196 "name": "Existed_Raid", 00:09:33.196 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:33.196 "strip_size_kb": 64, 00:09:33.196 "state": "configuring", 00:09:33.196 "raid_level": "concat", 00:09:33.196 "superblock": true, 00:09:33.196 "num_base_bdevs": 3, 00:09:33.196 "num_base_bdevs_discovered": 1, 00:09:33.196 "num_base_bdevs_operational": 3, 00:09:33.196 "base_bdevs_list": [ 00:09:33.196 { 00:09:33.196 "name": null, 00:09:33.196 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:33.196 "is_configured": false, 00:09:33.196 "data_offset": 0, 00:09:33.196 "data_size": 63488 00:09:33.196 }, 00:09:33.196 { 00:09:33.196 "name": null, 00:09:33.196 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:33.196 "is_configured": false, 00:09:33.196 "data_offset": 0, 
00:09:33.196 "data_size": 63488 00:09:33.196 }, 00:09:33.196 { 00:09:33.196 "name": "BaseBdev3", 00:09:33.196 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:33.196 "is_configured": true, 00:09:33.196 "data_offset": 2048, 00:09:33.196 "data_size": 63488 00:09:33.196 } 00:09:33.196 ] 00:09:33.196 }' 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.196 12:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.764 [2024-11-06 12:40:22.259612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.764 12:40:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.764 "name": "Existed_Raid", 00:09:33.764 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:33.764 "strip_size_kb": 64, 00:09:33.764 "state": "configuring", 00:09:33.764 "raid_level": "concat", 00:09:33.764 "superblock": true, 00:09:33.764 "num_base_bdevs": 3, 00:09:33.764 
"num_base_bdevs_discovered": 2, 00:09:33.764 "num_base_bdevs_operational": 3, 00:09:33.764 "base_bdevs_list": [ 00:09:33.764 { 00:09:33.764 "name": null, 00:09:33.764 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:33.764 "is_configured": false, 00:09:33.764 "data_offset": 0, 00:09:33.764 "data_size": 63488 00:09:33.764 }, 00:09:33.764 { 00:09:33.764 "name": "BaseBdev2", 00:09:33.764 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:33.764 "is_configured": true, 00:09:33.764 "data_offset": 2048, 00:09:33.764 "data_size": 63488 00:09:33.764 }, 00:09:33.764 { 00:09:33.764 "name": "BaseBdev3", 00:09:33.764 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:33.764 "is_configured": true, 00:09:33.764 "data_offset": 2048, 00:09:33.764 "data_size": 63488 00:09:33.764 } 00:09:33.764 ] 00:09:33.764 }' 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.764 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.330 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.330 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.330 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.331 12:40:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50b58796-7b18-4ed3-81ed-b2332dbaa48d 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 [2024-11-06 12:40:22.865136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:34.331 [2024-11-06 12:40:22.865498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:34.331 [2024-11-06 12:40:22.865524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.331 NewBaseBdev 00:09:34.331 [2024-11-06 12:40:22.865849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:34.331 [2024-11-06 12:40:22.866028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:34.331 [2024-11-06 12:40:22.866044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:34.331 [2024-11-06 12:40:22.866236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 
00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 [ 00:09:34.331 { 00:09:34.331 "name": "NewBaseBdev", 00:09:34.331 "aliases": [ 00:09:34.331 "50b58796-7b18-4ed3-81ed-b2332dbaa48d" 00:09:34.331 ], 00:09:34.331 "product_name": "Malloc disk", 00:09:34.331 "block_size": 512, 00:09:34.331 "num_blocks": 65536, 00:09:34.331 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:34.331 "assigned_rate_limits": { 00:09:34.331 "rw_ios_per_sec": 0, 00:09:34.331 "rw_mbytes_per_sec": 0, 00:09:34.331 "r_mbytes_per_sec": 0, 00:09:34.331 "w_mbytes_per_sec": 0 00:09:34.331 }, 00:09:34.331 "claimed": true, 00:09:34.331 "claim_type": "exclusive_write", 00:09:34.331 "zoned": false, 00:09:34.331 "supported_io_types": { 00:09:34.331 "read": true, 00:09:34.331 "write": true, 
00:09:34.331 "unmap": true, 00:09:34.331 "flush": true, 00:09:34.331 "reset": true, 00:09:34.331 "nvme_admin": false, 00:09:34.331 "nvme_io": false, 00:09:34.331 "nvme_io_md": false, 00:09:34.331 "write_zeroes": true, 00:09:34.331 "zcopy": true, 00:09:34.331 "get_zone_info": false, 00:09:34.331 "zone_management": false, 00:09:34.331 "zone_append": false, 00:09:34.331 "compare": false, 00:09:34.331 "compare_and_write": false, 00:09:34.331 "abort": true, 00:09:34.331 "seek_hole": false, 00:09:34.331 "seek_data": false, 00:09:34.331 "copy": true, 00:09:34.331 "nvme_iov_md": false 00:09:34.331 }, 00:09:34.331 "memory_domains": [ 00:09:34.331 { 00:09:34.331 "dma_device_id": "system", 00:09:34.331 "dma_device_type": 1 00:09:34.331 }, 00:09:34.331 { 00:09:34.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.331 "dma_device_type": 2 00:09:34.331 } 00:09:34.331 ], 00:09:34.331 "driver_specific": {} 00:09:34.331 } 00:09:34.331 ] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.331 "name": "Existed_Raid", 00:09:34.331 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:34.331 "strip_size_kb": 64, 00:09:34.331 "state": "online", 00:09:34.331 "raid_level": "concat", 00:09:34.331 "superblock": true, 00:09:34.331 "num_base_bdevs": 3, 00:09:34.331 "num_base_bdevs_discovered": 3, 00:09:34.331 "num_base_bdevs_operational": 3, 00:09:34.331 "base_bdevs_list": [ 00:09:34.331 { 00:09:34.331 "name": "NewBaseBdev", 00:09:34.331 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:34.331 "is_configured": true, 00:09:34.331 "data_offset": 2048, 00:09:34.331 "data_size": 63488 00:09:34.331 }, 00:09:34.331 { 00:09:34.331 "name": "BaseBdev2", 00:09:34.331 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:34.331 "is_configured": true, 00:09:34.331 "data_offset": 2048, 00:09:34.331 "data_size": 63488 00:09:34.331 }, 00:09:34.331 { 00:09:34.331 "name": "BaseBdev3", 00:09:34.331 "uuid": 
"f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:34.331 "is_configured": true, 00:09:34.331 "data_offset": 2048, 00:09:34.331 "data_size": 63488 00:09:34.331 } 00:09:34.331 ] 00:09:34.331 }' 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.331 12:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.898 [2024-11-06 12:40:23.413800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.898 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.898 "name": "Existed_Raid", 00:09:34.898 "aliases": [ 00:09:34.898 "21ae91e5-f2e4-4ab3-b028-903c8949eb44" 
00:09:34.898 ], 00:09:34.898 "product_name": "Raid Volume", 00:09:34.898 "block_size": 512, 00:09:34.898 "num_blocks": 190464, 00:09:34.898 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:34.898 "assigned_rate_limits": { 00:09:34.898 "rw_ios_per_sec": 0, 00:09:34.898 "rw_mbytes_per_sec": 0, 00:09:34.898 "r_mbytes_per_sec": 0, 00:09:34.898 "w_mbytes_per_sec": 0 00:09:34.898 }, 00:09:34.898 "claimed": false, 00:09:34.898 "zoned": false, 00:09:34.898 "supported_io_types": { 00:09:34.898 "read": true, 00:09:34.898 "write": true, 00:09:34.898 "unmap": true, 00:09:34.898 "flush": true, 00:09:34.898 "reset": true, 00:09:34.898 "nvme_admin": false, 00:09:34.898 "nvme_io": false, 00:09:34.898 "nvme_io_md": false, 00:09:34.898 "write_zeroes": true, 00:09:34.898 "zcopy": false, 00:09:34.898 "get_zone_info": false, 00:09:34.898 "zone_management": false, 00:09:34.898 "zone_append": false, 00:09:34.898 "compare": false, 00:09:34.898 "compare_and_write": false, 00:09:34.898 "abort": false, 00:09:34.898 "seek_hole": false, 00:09:34.898 "seek_data": false, 00:09:34.898 "copy": false, 00:09:34.898 "nvme_iov_md": false 00:09:34.898 }, 00:09:34.898 "memory_domains": [ 00:09:34.898 { 00:09:34.898 "dma_device_id": "system", 00:09:34.898 "dma_device_type": 1 00:09:34.898 }, 00:09:34.898 { 00:09:34.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.898 "dma_device_type": 2 00:09:34.898 }, 00:09:34.898 { 00:09:34.898 "dma_device_id": "system", 00:09:34.898 "dma_device_type": 1 00:09:34.898 }, 00:09:34.898 { 00:09:34.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.898 "dma_device_type": 2 00:09:34.898 }, 00:09:34.898 { 00:09:34.898 "dma_device_id": "system", 00:09:34.898 "dma_device_type": 1 00:09:34.898 }, 00:09:34.898 { 00:09:34.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.898 "dma_device_type": 2 00:09:34.898 } 00:09:34.898 ], 00:09:34.898 "driver_specific": { 00:09:34.898 "raid": { 00:09:34.898 "uuid": "21ae91e5-f2e4-4ab3-b028-903c8949eb44", 00:09:34.898 
"strip_size_kb": 64, 00:09:34.898 "state": "online", 00:09:34.898 "raid_level": "concat", 00:09:34.898 "superblock": true, 00:09:34.898 "num_base_bdevs": 3, 00:09:34.898 "num_base_bdevs_discovered": 3, 00:09:34.898 "num_base_bdevs_operational": 3, 00:09:34.898 "base_bdevs_list": [ 00:09:34.898 { 00:09:34.898 "name": "NewBaseBdev", 00:09:34.898 "uuid": "50b58796-7b18-4ed3-81ed-b2332dbaa48d", 00:09:34.898 "is_configured": true, 00:09:34.898 "data_offset": 2048, 00:09:34.898 "data_size": 63488 00:09:34.898 }, 00:09:34.898 { 00:09:34.898 "name": "BaseBdev2", 00:09:34.899 "uuid": "c6e779d1-f4d9-4206-8137-4867ffc11b78", 00:09:34.899 "is_configured": true, 00:09:34.899 "data_offset": 2048, 00:09:34.899 "data_size": 63488 00:09:34.899 }, 00:09:34.899 { 00:09:34.899 "name": "BaseBdev3", 00:09:34.899 "uuid": "f0b1a0b3-0dc2-4416-9c5c-4a5be4ef3f57", 00:09:34.899 "is_configured": true, 00:09:34.899 "data_offset": 2048, 00:09:34.899 "data_size": 63488 00:09:34.899 } 00:09:34.899 ] 00:09:34.899 } 00:09:34.899 } 00:09:34.899 }' 00:09:34.899 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.899 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:34.899 BaseBdev2 00:09:34.899 BaseBdev3' 00:09:34.899 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.201 [2024-11-06 12:40:23.753543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.201 [2024-11-06 12:40:23.753601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.201 [2024-11-06 12:40:23.753723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.201 [2024-11-06 12:40:23.753809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.201 [2024-11-06 12:40:23.753830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66276 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66276 ']' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 66276 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66276 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:35.201 killing process with pid 66276 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66276' 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66276 00:09:35.201 12:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66276 00:09:35.201 [2024-11-06 12:40:23.787539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.460 [2024-11-06 12:40:24.078275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.832 12:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:36.832 00:09:36.832 real 0m11.772s 00:09:36.832 user 0m19.365s 00:09:36.832 sys 0m1.664s 00:09:36.832 12:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.832 ************************************ 00:09:36.832 END TEST raid_state_function_test_sb 00:09:36.832 ************************************ 00:09:36.832 12:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 12:40:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:36.832 12:40:25 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:36.832 12:40:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.832 12:40:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 ************************************ 00:09:36.832 START TEST raid_superblock_test 00:09:36.832 ************************************ 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:36.832 12:40:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66907 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66907 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66907 ']' 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.832 12:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.832 [2024-11-06 12:40:25.347366] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:09:36.832 [2024-11-06 12:40:25.347767] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66907 ] 00:09:37.091 [2024-11-06 12:40:25.525057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.091 [2024-11-06 12:40:25.671787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.349 [2024-11-06 12:40:25.895770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.350 [2024-11-06 12:40:25.895887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:37.918 
12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 malloc1 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 [2024-11-06 12:40:26.374225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:37.918 [2024-11-06 12:40:26.374596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.918 [2024-11-06 12:40:26.374654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:37.918 [2024-11-06 12:40:26.374687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.918 [2024-11-06 12:40:26.377888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.918 [2024-11-06 12:40:26.377935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:37.918 pt1 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 malloc2 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 [2024-11-06 12:40:26.434470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:37.918 [2024-11-06 12:40:26.434817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.918 [2024-11-06 12:40:26.434907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:37.918 [2024-11-06 12:40:26.435107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.918 [2024-11-06 12:40:26.438241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.918 [2024-11-06 12:40:26.438398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:37.918 
pt2 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 malloc3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 [2024-11-06 12:40:26.507781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:37.918 [2024-11-06 12:40:26.508094] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.918 [2024-11-06 12:40:26.508177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:37.918 [2024-11-06 12:40:26.508214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.918 [2024-11-06 12:40:26.511202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.918 [2024-11-06 12:40:26.511242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:37.918 pt3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 [2024-11-06 12:40:26.519900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:37.918 [2024-11-06 12:40:26.522484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:37.918 [2024-11-06 12:40:26.522724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:37.918 [2024-11-06 12:40:26.522952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:37.918 [2024-11-06 12:40:26.522976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:37.918 [2024-11-06 12:40:26.523312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:37.918 [2024-11-06 12:40:26.523549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:37.918 [2024-11-06 12:40:26.523566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:37.918 [2024-11-06 12:40:26.523810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.918 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.177 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.177 "name": "raid_bdev1", 00:09:38.177 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:38.177 "strip_size_kb": 64, 00:09:38.177 "state": "online", 00:09:38.177 "raid_level": "concat", 00:09:38.177 "superblock": true, 00:09:38.177 "num_base_bdevs": 3, 00:09:38.177 "num_base_bdevs_discovered": 3, 00:09:38.177 "num_base_bdevs_operational": 3, 00:09:38.177 "base_bdevs_list": [ 00:09:38.177 { 00:09:38.177 "name": "pt1", 00:09:38.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.177 "is_configured": true, 00:09:38.177 "data_offset": 2048, 00:09:38.177 "data_size": 63488 00:09:38.177 }, 00:09:38.177 { 00:09:38.177 "name": "pt2", 00:09:38.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.177 "is_configured": true, 00:09:38.177 "data_offset": 2048, 00:09:38.177 "data_size": 63488 00:09:38.177 }, 00:09:38.177 { 00:09:38.177 "name": "pt3", 00:09:38.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.177 "is_configured": true, 00:09:38.177 "data_offset": 2048, 00:09:38.177 "data_size": 63488 00:09:38.177 } 00:09:38.177 ] 00:09:38.177 }' 00:09:38.177 12:40:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.177 12:40:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.435 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:38.435 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:38.435 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.435 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.436 [2024-11-06 12:40:27.052473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.436 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.695 "name": "raid_bdev1", 00:09:38.695 "aliases": [ 00:09:38.695 "82dc2736-5516-4582-a0ce-7e53af7e24e0" 00:09:38.695 ], 00:09:38.695 "product_name": "Raid Volume", 00:09:38.695 "block_size": 512, 00:09:38.695 "num_blocks": 190464, 00:09:38.695 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:38.695 "assigned_rate_limits": { 00:09:38.695 "rw_ios_per_sec": 0, 00:09:38.695 "rw_mbytes_per_sec": 0, 00:09:38.695 "r_mbytes_per_sec": 0, 00:09:38.695 "w_mbytes_per_sec": 0 00:09:38.695 }, 00:09:38.695 "claimed": false, 00:09:38.695 "zoned": false, 00:09:38.695 "supported_io_types": { 00:09:38.695 "read": true, 00:09:38.695 "write": true, 00:09:38.695 "unmap": true, 00:09:38.695 "flush": true, 00:09:38.695 "reset": true, 00:09:38.695 "nvme_admin": false, 00:09:38.695 "nvme_io": false, 00:09:38.695 "nvme_io_md": false, 00:09:38.695 "write_zeroes": true, 00:09:38.695 "zcopy": false, 00:09:38.695 "get_zone_info": false, 00:09:38.695 "zone_management": false, 00:09:38.695 "zone_append": false, 00:09:38.695 "compare": 
false, 00:09:38.695 "compare_and_write": false, 00:09:38.695 "abort": false, 00:09:38.695 "seek_hole": false, 00:09:38.695 "seek_data": false, 00:09:38.695 "copy": false, 00:09:38.695 "nvme_iov_md": false 00:09:38.695 }, 00:09:38.695 "memory_domains": [ 00:09:38.695 { 00:09:38.695 "dma_device_id": "system", 00:09:38.695 "dma_device_type": 1 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.695 "dma_device_type": 2 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "dma_device_id": "system", 00:09:38.695 "dma_device_type": 1 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.695 "dma_device_type": 2 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "dma_device_id": "system", 00:09:38.695 "dma_device_type": 1 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.695 "dma_device_type": 2 00:09:38.695 } 00:09:38.695 ], 00:09:38.695 "driver_specific": { 00:09:38.695 "raid": { 00:09:38.695 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:38.695 "strip_size_kb": 64, 00:09:38.695 "state": "online", 00:09:38.695 "raid_level": "concat", 00:09:38.695 "superblock": true, 00:09:38.695 "num_base_bdevs": 3, 00:09:38.695 "num_base_bdevs_discovered": 3, 00:09:38.695 "num_base_bdevs_operational": 3, 00:09:38.695 "base_bdevs_list": [ 00:09:38.695 { 00:09:38.695 "name": "pt1", 00:09:38.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.695 "is_configured": true, 00:09:38.695 "data_offset": 2048, 00:09:38.695 "data_size": 63488 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "name": "pt2", 00:09:38.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.695 "is_configured": true, 00:09:38.695 "data_offset": 2048, 00:09:38.695 "data_size": 63488 00:09:38.695 }, 00:09:38.695 { 00:09:38.695 "name": "pt3", 00:09:38.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.695 "is_configured": true, 00:09:38.695 "data_offset": 2048, 00:09:38.695 
"data_size": 63488 00:09:38.695 } 00:09:38.695 ] 00:09:38.695 } 00:09:38.695 } 00:09:38.695 }' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:38.695 pt2 00:09:38.695 pt3' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:38.695 12:40:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.695 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.954 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.954 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.954 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.954 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 [2024-11-06 12:40:27.396516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.955 12:40:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82dc2736-5516-4582-a0ce-7e53af7e24e0 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 82dc2736-5516-4582-a0ce-7e53af7e24e0 ']' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 [2024-11-06 12:40:27.448162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.955 [2024-11-06 12:40:27.448338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.955 [2024-11-06 12:40:27.448578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.955 [2024-11-06 12:40:27.448804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.955 [2024-11-06 12:40:27.448832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.955 [2024-11-06 12:40:27.592307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:38.955 [2024-11-06 12:40:27.595201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:38.955 [2024-11-06 12:40:27.595418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:38.955 [2024-11-06 12:40:27.595514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:38.955 [2024-11-06 12:40:27.595597] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:38.955 [2024-11-06 12:40:27.595633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:38.955 [2024-11-06 12:40:27.595664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:38.955 [2024-11-06 12:40:27.595679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:38.955 request: 00:09:38.955 { 00:09:38.955 "name": "raid_bdev1", 00:09:38.955 "raid_level": "concat", 00:09:38.955 "base_bdevs": [ 00:09:38.955 "malloc1", 00:09:38.955 "malloc2", 00:09:38.955 "malloc3" 00:09:38.955 ], 00:09:38.955 "strip_size_kb": 64, 00:09:38.955 "superblock": false, 00:09:38.955 "method": "bdev_raid_create", 00:09:38.955 "req_id": 1 00:09:38.955 } 00:09:38.955 Got JSON-RPC error response 00:09:38.955 response: 00:09:38.955 { 00:09:38.955 "code": -17, 00:09:38.955 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:38.955 } 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:38.955 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.215 [2024-11-06 12:40:27.656248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.215 [2024-11-06 12:40:27.656441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.215 [2024-11-06 12:40:27.656592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:39.215 [2024-11-06 12:40:27.656719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.215 [2024-11-06 12:40:27.659911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.215 [2024-11-06 12:40:27.660074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.215 [2024-11-06 12:40:27.660322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:39.215 [2024-11-06 12:40:27.660499] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.215 pt1 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.215 "name": "raid_bdev1", 
00:09:39.215 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:39.215 "strip_size_kb": 64, 00:09:39.215 "state": "configuring", 00:09:39.215 "raid_level": "concat", 00:09:39.215 "superblock": true, 00:09:39.215 "num_base_bdevs": 3, 00:09:39.215 "num_base_bdevs_discovered": 1, 00:09:39.215 "num_base_bdevs_operational": 3, 00:09:39.215 "base_bdevs_list": [ 00:09:39.215 { 00:09:39.215 "name": "pt1", 00:09:39.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.215 "is_configured": true, 00:09:39.215 "data_offset": 2048, 00:09:39.215 "data_size": 63488 00:09:39.215 }, 00:09:39.215 { 00:09:39.215 "name": null, 00:09:39.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.215 "is_configured": false, 00:09:39.215 "data_offset": 2048, 00:09:39.215 "data_size": 63488 00:09:39.215 }, 00:09:39.215 { 00:09:39.215 "name": null, 00:09:39.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.215 "is_configured": false, 00:09:39.215 "data_offset": 2048, 00:09:39.215 "data_size": 63488 00:09:39.215 } 00:09:39.215 ] 00:09:39.215 }' 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.215 12:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.782 [2024-11-06 12:40:28.184583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.782 [2024-11-06 12:40:28.184817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.782 [2024-11-06 12:40:28.184903] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:39.782 [2024-11-06 12:40:28.185187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.782 [2024-11-06 12:40:28.185837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.782 [2024-11-06 12:40:28.185871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.782 [2024-11-06 12:40:28.185995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:39.782 [2024-11-06 12:40:28.186031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.782 pt2 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.782 [2024-11-06 12:40:28.192836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:39.782 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.783 "name": "raid_bdev1", 00:09:39.783 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:39.783 "strip_size_kb": 64, 00:09:39.783 "state": "configuring", 00:09:39.783 "raid_level": "concat", 00:09:39.783 "superblock": true, 00:09:39.783 "num_base_bdevs": 3, 00:09:39.783 "num_base_bdevs_discovered": 1, 00:09:39.783 "num_base_bdevs_operational": 3, 00:09:39.783 "base_bdevs_list": [ 00:09:39.783 { 00:09:39.783 "name": "pt1", 00:09:39.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.783 "is_configured": true, 00:09:39.783 "data_offset": 2048, 00:09:39.783 "data_size": 63488 00:09:39.783 }, 00:09:39.783 { 00:09:39.783 "name": null, 00:09:39.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.783 "is_configured": false, 00:09:39.783 "data_offset": 0, 00:09:39.783 "data_size": 63488 00:09:39.783 }, 00:09:39.783 { 00:09:39.783 "name": null, 00:09:39.783 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.783 "is_configured": false, 00:09:39.783 "data_offset": 2048, 00:09:39.783 "data_size": 63488 00:09:39.783 } 00:09:39.783 ] 00:09:39.783 }' 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.783 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 [2024-11-06 12:40:28.720692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.350 [2024-11-06 12:40:28.720986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.350 [2024-11-06 12:40:28.721152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:40.350 [2024-11-06 12:40:28.721214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.350 [2024-11-06 12:40:28.721849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.350 [2024-11-06 12:40:28.721895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.350 [2024-11-06 12:40:28.722013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.350 [2024-11-06 12:40:28.722058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.350 pt2 00:09:40.350 12:40:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.350 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 [2024-11-06 12:40:28.728656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.350 [2024-11-06 12:40:28.728725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.350 [2024-11-06 12:40:28.728753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:40.351 [2024-11-06 12:40:28.728775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.351 [2024-11-06 12:40:28.729282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.351 [2024-11-06 12:40:28.729327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.351 [2024-11-06 12:40:28.729412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:40.351 [2024-11-06 12:40:28.729451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.351 [2024-11-06 12:40:28.729620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.351 [2024-11-06 12:40:28.729653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.351 [2024-11-06 12:40:28.729979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:40.351 [2024-11-06 12:40:28.730185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.351 [2024-11-06 12:40:28.730227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.351 [2024-11-06 12:40:28.730406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.351 pt3 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.351 12:40:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.351 "name": "raid_bdev1", 00:09:40.351 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:40.351 "strip_size_kb": 64, 00:09:40.351 "state": "online", 00:09:40.351 "raid_level": "concat", 00:09:40.351 "superblock": true, 00:09:40.351 "num_base_bdevs": 3, 00:09:40.351 "num_base_bdevs_discovered": 3, 00:09:40.351 "num_base_bdevs_operational": 3, 00:09:40.351 "base_bdevs_list": [ 00:09:40.351 { 00:09:40.351 "name": "pt1", 00:09:40.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.351 "is_configured": true, 00:09:40.351 "data_offset": 2048, 00:09:40.351 "data_size": 63488 00:09:40.351 }, 00:09:40.351 { 00:09:40.351 "name": "pt2", 00:09:40.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.351 "is_configured": true, 00:09:40.351 "data_offset": 2048, 00:09:40.351 "data_size": 63488 00:09:40.351 }, 00:09:40.351 { 00:09:40.351 "name": "pt3", 00:09:40.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.351 "is_configured": true, 00:09:40.351 "data_offset": 2048, 00:09:40.351 "data_size": 63488 00:09:40.351 } 00:09:40.351 ] 00:09:40.351 }' 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.351 12:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.609 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 [2024-11-06 12:40:29.257267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.869 "name": "raid_bdev1", 00:09:40.869 "aliases": [ 00:09:40.869 "82dc2736-5516-4582-a0ce-7e53af7e24e0" 00:09:40.869 ], 00:09:40.869 "product_name": "Raid Volume", 00:09:40.869 "block_size": 512, 00:09:40.869 "num_blocks": 190464, 00:09:40.869 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:40.869 "assigned_rate_limits": { 00:09:40.869 "rw_ios_per_sec": 0, 00:09:40.869 "rw_mbytes_per_sec": 0, 00:09:40.869 "r_mbytes_per_sec": 0, 00:09:40.869 "w_mbytes_per_sec": 0 00:09:40.869 }, 00:09:40.869 "claimed": false, 00:09:40.869 "zoned": false, 00:09:40.869 "supported_io_types": { 00:09:40.869 "read": true, 00:09:40.869 "write": true, 00:09:40.869 "unmap": true, 00:09:40.869 "flush": true, 00:09:40.869 "reset": true, 00:09:40.869 "nvme_admin": false, 00:09:40.869 "nvme_io": false, 
00:09:40.869 "nvme_io_md": false, 00:09:40.869 "write_zeroes": true, 00:09:40.869 "zcopy": false, 00:09:40.869 "get_zone_info": false, 00:09:40.869 "zone_management": false, 00:09:40.869 "zone_append": false, 00:09:40.869 "compare": false, 00:09:40.869 "compare_and_write": false, 00:09:40.869 "abort": false, 00:09:40.869 "seek_hole": false, 00:09:40.869 "seek_data": false, 00:09:40.869 "copy": false, 00:09:40.869 "nvme_iov_md": false 00:09:40.869 }, 00:09:40.869 "memory_domains": [ 00:09:40.869 { 00:09:40.869 "dma_device_id": "system", 00:09:40.869 "dma_device_type": 1 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.869 "dma_device_type": 2 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "dma_device_id": "system", 00:09:40.869 "dma_device_type": 1 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.869 "dma_device_type": 2 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "dma_device_id": "system", 00:09:40.869 "dma_device_type": 1 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.869 "dma_device_type": 2 00:09:40.869 } 00:09:40.869 ], 00:09:40.869 "driver_specific": { 00:09:40.869 "raid": { 00:09:40.869 "uuid": "82dc2736-5516-4582-a0ce-7e53af7e24e0", 00:09:40.869 "strip_size_kb": 64, 00:09:40.869 "state": "online", 00:09:40.869 "raid_level": "concat", 00:09:40.869 "superblock": true, 00:09:40.869 "num_base_bdevs": 3, 00:09:40.869 "num_base_bdevs_discovered": 3, 00:09:40.869 "num_base_bdevs_operational": 3, 00:09:40.869 "base_bdevs_list": [ 00:09:40.869 { 00:09:40.869 "name": "pt1", 00:09:40.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.869 "is_configured": true, 00:09:40.869 "data_offset": 2048, 00:09:40.869 "data_size": 63488 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "name": "pt2", 00:09:40.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.869 "is_configured": true, 00:09:40.869 "data_offset": 2048, 00:09:40.869 
"data_size": 63488 00:09:40.869 }, 00:09:40.869 { 00:09:40.869 "name": "pt3", 00:09:40.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.869 "is_configured": true, 00:09:40.869 "data_offset": 2048, 00:09:40.869 "data_size": 63488 00:09:40.869 } 00:09:40.869 ] 00:09:40.869 } 00:09:40.869 } 00:09:40.869 }' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.869 pt2 00:09:40.869 pt3' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.869 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.128 [2024-11-06 12:40:29.565311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 82dc2736-5516-4582-a0ce-7e53af7e24e0 '!=' 82dc2736-5516-4582-a0ce-7e53af7e24e0 ']' 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66907 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66907 ']' 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66907 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:41.128 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.129 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66907 00:09:41.129 killing process with pid 66907 00:09:41.129 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:41.129 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:41.129 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66907' 00:09:41.129 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66907 00:09:41.129 [2024-11-06 12:40:29.656777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:41.129 12:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66907 00:09:41.129 [2024-11-06 12:40:29.656933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.129 [2024-11-06 12:40:29.657034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.129 [2024-11-06 12:40:29.657059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:41.387 [2024-11-06 12:40:29.936992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.365 12:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:42.365 00:09:42.365 real 0m5.740s 00:09:42.365 user 0m8.529s 00:09:42.365 sys 0m0.928s 00:09:42.365 12:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.365 12:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.365 ************************************ 00:09:42.365 END TEST raid_superblock_test 00:09:42.365 ************************************ 00:09:42.625 12:40:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:42.625 12:40:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:42.625 12:40:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.625 12:40:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.625 ************************************ 00:09:42.625 START TEST raid_read_error_test 00:09:42.625 ************************************ 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.625 12:40:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gGtTrMfGGP 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67166 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67166 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67166 ']' 00:09:42.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.625 12:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.625 [2024-11-06 12:40:31.172974] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:09:42.625 [2024-11-06 12:40:31.173175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67166 ] 00:09:42.883 [2024-11-06 12:40:31.362906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.883 [2024-11-06 12:40:31.495506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.141 [2024-11-06 12:40:31.699614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.141 [2024-11-06 12:40:31.699702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.708 BaseBdev1_malloc 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.708 true 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.708 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 [2024-11-06 12:40:32.211083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.709 [2024-11-06 12:40:32.211221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.709 [2024-11-06 12:40:32.211256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.709 [2024-11-06 12:40:32.211279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.709 [2024-11-06 12:40:32.214341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.709 [2024-11-06 12:40:32.214399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.709 BaseBdev1 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 BaseBdev2_malloc 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 true 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 [2024-11-06 12:40:32.273144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.709 [2024-11-06 12:40:32.273501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.709 [2024-11-06 12:40:32.273545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.709 [2024-11-06 12:40:32.273568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.709 [2024-11-06 12:40:32.276553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.709 [2024-11-06 12:40:32.276616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.709 BaseBdev2 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 BaseBdev3_malloc 00:09:43.709 12:40:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 true 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 [2024-11-06 12:40:32.348413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:43.709 [2024-11-06 12:40:32.348512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.709 [2024-11-06 12:40:32.348544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:43.709 [2024-11-06 12:40:32.348566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.709 [2024-11-06 12:40:32.351464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.709 [2024-11-06 12:40:32.351522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:43.709 BaseBdev3 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.709 [2024-11-06 12:40:32.356535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.709 [2024-11-06 12:40:32.359221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.709 [2024-11-06 12:40:32.359408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.709 [2024-11-06 12:40:32.359717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:43.709 [2024-11-06 12:40:32.359739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.709 [2024-11-06 12:40:32.360101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:43.709 [2024-11-06 12:40:32.360388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:43.709 [2024-11-06 12:40:32.360417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:43.709 [2024-11-06 12:40:32.360698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.709 12:40:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.709 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.968 "name": "raid_bdev1", 00:09:43.968 "uuid": "90cd74a6-39ff-455f-8cd0-c4788316799c", 00:09:43.968 "strip_size_kb": 64, 00:09:43.968 "state": "online", 00:09:43.968 "raid_level": "concat", 00:09:43.968 "superblock": true, 00:09:43.968 "num_base_bdevs": 3, 00:09:43.968 "num_base_bdevs_discovered": 3, 00:09:43.968 "num_base_bdevs_operational": 3, 00:09:43.968 "base_bdevs_list": [ 00:09:43.968 { 00:09:43.968 "name": "BaseBdev1", 00:09:43.968 "uuid": "e3a59bce-f25a-588e-908c-cd42bb3d0419", 00:09:43.968 "is_configured": true, 00:09:43.968 "data_offset": 2048, 00:09:43.968 "data_size": 63488 00:09:43.968 }, 00:09:43.968 { 00:09:43.968 "name": "BaseBdev2", 00:09:43.968 "uuid": "0764b83f-51fe-52c2-bd55-477412929f0b", 00:09:43.968 "is_configured": true, 00:09:43.968 "data_offset": 2048, 00:09:43.968 "data_size": 63488 
00:09:43.968 }, 00:09:43.968 { 00:09:43.968 "name": "BaseBdev3", 00:09:43.968 "uuid": "90d73f03-358d-5703-8972-48e461d886f5", 00:09:43.968 "is_configured": true, 00:09:43.968 "data_offset": 2048, 00:09:43.968 "data_size": 63488 00:09:43.968 } 00:09:43.968 ] 00:09:43.968 }' 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.968 12:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.534 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.534 12:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.534 [2024-11-06 12:40:32.990342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.481 "name": "raid_bdev1", 00:09:45.481 "uuid": "90cd74a6-39ff-455f-8cd0-c4788316799c", 00:09:45.481 "strip_size_kb": 64, 00:09:45.481 "state": "online", 00:09:45.481 "raid_level": "concat", 00:09:45.481 "superblock": true, 00:09:45.481 "num_base_bdevs": 3, 00:09:45.481 "num_base_bdevs_discovered": 3, 00:09:45.481 "num_base_bdevs_operational": 3, 00:09:45.481 "base_bdevs_list": [ 00:09:45.481 { 00:09:45.481 "name": "BaseBdev1", 00:09:45.481 "uuid": "e3a59bce-f25a-588e-908c-cd42bb3d0419", 00:09:45.481 "is_configured": true, 00:09:45.481 "data_offset": 2048, 00:09:45.481 "data_size": 63488 
00:09:45.481 }, 00:09:45.481 { 00:09:45.481 "name": "BaseBdev2", 00:09:45.481 "uuid": "0764b83f-51fe-52c2-bd55-477412929f0b", 00:09:45.481 "is_configured": true, 00:09:45.481 "data_offset": 2048, 00:09:45.481 "data_size": 63488 00:09:45.481 }, 00:09:45.481 { 00:09:45.481 "name": "BaseBdev3", 00:09:45.481 "uuid": "90d73f03-358d-5703-8972-48e461d886f5", 00:09:45.481 "is_configured": true, 00:09:45.481 "data_offset": 2048, 00:09:45.481 "data_size": 63488 00:09:45.481 } 00:09:45.481 ] 00:09:45.481 }' 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.481 12:40:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.047 12:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.047 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.047 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.047 [2024-11-06 12:40:34.409249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.047 [2024-11-06 12:40:34.409305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.047 [2024-11-06 12:40:34.412765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.047 [2024-11-06 12:40:34.412840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.047 [2024-11-06 12:40:34.412901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.047 [2024-11-06 12:40:34.412923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:46.048 { 00:09:46.048 "results": [ 00:09:46.048 { 00:09:46.048 "job": "raid_bdev1", 00:09:46.048 "core_mask": "0x1", 00:09:46.048 "workload": "randrw", 00:09:46.048 "percentage": 50, 
00:09:46.048 "status": "finished", 00:09:46.048 "queue_depth": 1, 00:09:46.048 "io_size": 131072, 00:09:46.048 "runtime": 1.416416, 00:09:46.048 "iops": 10078.959853602331, 00:09:46.048 "mibps": 1259.8699817002914, 00:09:46.048 "io_failed": 1, 00:09:46.048 "io_timeout": 0, 00:09:46.048 "avg_latency_us": 137.9778659891625, 00:09:46.048 "min_latency_us": 40.261818181818185, 00:09:46.048 "max_latency_us": 1869.2654545454545 00:09:46.048 } 00:09:46.048 ], 00:09:46.048 "core_count": 1 00:09:46.048 } 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67166 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67166 ']' 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67166 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67166 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67166' 00:09:46.048 killing process with pid 67166 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67166 00:09:46.048 [2024-11-06 12:40:34.450980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.048 12:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67166 00:09:46.048 [2024-11-06 
12:40:34.660349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gGtTrMfGGP 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:47.424 00:09:47.424 real 0m4.721s 00:09:47.424 user 0m5.790s 00:09:47.424 sys 0m0.620s 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.424 ************************************ 00:09:47.424 END TEST raid_read_error_test 00:09:47.424 ************************************ 00:09:47.424 12:40:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.424 12:40:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:47.424 12:40:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:47.424 12:40:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.424 12:40:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.424 ************************************ 00:09:47.424 START TEST raid_write_error_test 00:09:47.424 ************************************ 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:09:47.424 12:40:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.424 12:40:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qlK21zbFn6 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67317 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67317 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67317 ']' 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.424 12:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.424 [2024-11-06 12:40:35.950824] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:09:47.424 [2024-11-06 12:40:35.951021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67317 ] 00:09:47.684 [2024-11-06 12:40:36.137253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.684 [2024-11-06 12:40:36.271817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.942 [2024-11-06 12:40:36.480963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.942 [2024-11-06 12:40:36.481068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 BaseBdev1_malloc 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 true 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 [2024-11-06 12:40:36.944021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.512 [2024-11-06 12:40:36.944096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.512 [2024-11-06 12:40:36.944132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.512 [2024-11-06 12:40:36.944203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.512 [2024-11-06 12:40:36.947320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.512 [2024-11-06 12:40:36.947391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.512 BaseBdev1 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.512 BaseBdev2_malloc 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 true 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 [2024-11-06 12:40:37.001057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.512 [2024-11-06 12:40:37.001134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.512 [2024-11-06 12:40:37.001180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.512 [2024-11-06 12:40:37.001202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.512 [2024-11-06 12:40:37.004013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.512 [2024-11-06 12:40:37.004070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.512 BaseBdev2 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.512 12:40:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 BaseBdev3_malloc 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 true 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 [2024-11-06 12:40:37.071061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:48.512 [2024-11-06 12:40:37.071133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.512 [2024-11-06 12:40:37.071165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.512 [2024-11-06 12:40:37.071187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.512 [2024-11-06 12:40:37.074187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.512 [2024-11-06 12:40:37.074256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:48.512 BaseBdev3 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.512 [2024-11-06 12:40:37.079267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.512 [2024-11-06 12:40:37.081866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.512 [2024-11-06 12:40:37.081988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.512 [2024-11-06 12:40:37.082356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.512 [2024-11-06 12:40:37.082379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.512 [2024-11-06 12:40:37.082734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:48.512 [2024-11-06 12:40:37.082956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.512 [2024-11-06 12:40:37.082980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.512 [2024-11-06 12:40:37.083277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.512 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.513 "name": "raid_bdev1", 00:09:48.513 "uuid": "fecc40aa-45f5-4664-b373-ab510e1260d5", 00:09:48.513 "strip_size_kb": 64, 00:09:48.513 "state": "online", 00:09:48.513 "raid_level": "concat", 00:09:48.513 "superblock": true, 00:09:48.513 "num_base_bdevs": 3, 00:09:48.513 "num_base_bdevs_discovered": 3, 00:09:48.513 "num_base_bdevs_operational": 3, 00:09:48.513 "base_bdevs_list": [ 00:09:48.513 { 00:09:48.513 
"name": "BaseBdev1", 00:09:48.513 "uuid": "f5ef5d7f-58be-5097-be47-4a54d24afa2e", 00:09:48.513 "is_configured": true, 00:09:48.513 "data_offset": 2048, 00:09:48.513 "data_size": 63488 00:09:48.513 }, 00:09:48.513 { 00:09:48.513 "name": "BaseBdev2", 00:09:48.513 "uuid": "79ca0387-8d93-51e9-bae8-7ddd638bc530", 00:09:48.513 "is_configured": true, 00:09:48.513 "data_offset": 2048, 00:09:48.513 "data_size": 63488 00:09:48.513 }, 00:09:48.513 { 00:09:48.513 "name": "BaseBdev3", 00:09:48.513 "uuid": "01820963-c980-5020-a3d6-427eb5a833e1", 00:09:48.513 "is_configured": true, 00:09:48.513 "data_offset": 2048, 00:09:48.513 "data_size": 63488 00:09:48.513 } 00:09:48.513 ] 00:09:48.513 }' 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.513 12:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.081 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.081 12:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.340 [2024-11-06 12:40:37.741009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.278 "name": "raid_bdev1", 00:09:50.278 "uuid": "fecc40aa-45f5-4664-b373-ab510e1260d5", 00:09:50.278 "strip_size_kb": 64, 00:09:50.278 "state": "online", 
00:09:50.278 "raid_level": "concat", 00:09:50.278 "superblock": true, 00:09:50.278 "num_base_bdevs": 3, 00:09:50.278 "num_base_bdevs_discovered": 3, 00:09:50.278 "num_base_bdevs_operational": 3, 00:09:50.278 "base_bdevs_list": [ 00:09:50.278 { 00:09:50.278 "name": "BaseBdev1", 00:09:50.278 "uuid": "f5ef5d7f-58be-5097-be47-4a54d24afa2e", 00:09:50.278 "is_configured": true, 00:09:50.278 "data_offset": 2048, 00:09:50.278 "data_size": 63488 00:09:50.278 }, 00:09:50.278 { 00:09:50.278 "name": "BaseBdev2", 00:09:50.278 "uuid": "79ca0387-8d93-51e9-bae8-7ddd638bc530", 00:09:50.278 "is_configured": true, 00:09:50.278 "data_offset": 2048, 00:09:50.278 "data_size": 63488 00:09:50.278 }, 00:09:50.278 { 00:09:50.278 "name": "BaseBdev3", 00:09:50.278 "uuid": "01820963-c980-5020-a3d6-427eb5a833e1", 00:09:50.278 "is_configured": true, 00:09:50.278 "data_offset": 2048, 00:09:50.278 "data_size": 63488 00:09:50.278 } 00:09:50.278 ] 00:09:50.278 }' 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.278 12:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.537 12:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.537 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.538 [2024-11-06 12:40:39.155580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.538 [2024-11-06 12:40:39.155626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.538 [2024-11-06 12:40:39.158934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.538 [2024-11-06 12:40:39.159002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.538 [2024-11-06 12:40:39.159056] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.538 [2024-11-06 12:40:39.159070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.538 { 00:09:50.538 "results": [ 00:09:50.538 { 00:09:50.538 "job": "raid_bdev1", 00:09:50.538 "core_mask": "0x1", 00:09:50.538 "workload": "randrw", 00:09:50.538 "percentage": 50, 00:09:50.538 "status": "finished", 00:09:50.538 "queue_depth": 1, 00:09:50.538 "io_size": 131072, 00:09:50.538 "runtime": 1.411869, 00:09:50.538 "iops": 8951.255392674533, 00:09:50.538 "mibps": 1118.9069240843166, 00:09:50.538 "io_failed": 1, 00:09:50.538 "io_timeout": 0, 00:09:50.538 "avg_latency_us": 156.11817390616346, 00:09:50.538 "min_latency_us": 43.28727272727273, 00:09:50.538 "max_latency_us": 1846.9236363636364 00:09:50.538 } 00:09:50.538 ], 00:09:50.538 "core_count": 1 00:09:50.538 } 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67317 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67317 ']' 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67317 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.538 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67317 00:09:50.796 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.796 killing process with pid 67317 00:09:50.796 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.796 12:40:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67317' 00:09:50.796 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67317 00:09:50.796 [2024-11-06 12:40:39.195876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.796 12:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67317 00:09:50.796 [2024-11-06 12:40:39.432454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qlK21zbFn6 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:52.172 00:09:52.172 real 0m4.758s 00:09:52.172 user 0m5.868s 00:09:52.172 sys 0m0.601s 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.172 ************************************ 00:09:52.172 END TEST raid_write_error_test 00:09:52.172 ************************************ 00:09:52.172 12:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.172 12:40:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.172 12:40:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:52.172 12:40:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:52.172 12:40:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.172 12:40:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.172 ************************************ 00:09:52.172 START TEST raid_state_function_test 00:09:52.173 ************************************ 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67457 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.173 Process raid pid: 67457 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67457' 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67457 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67457 ']' 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:52.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:52.173 12:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.173 [2024-11-06 12:40:40.751225] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:09:52.173 [2024-11-06 12:40:40.751418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.432 [2024-11-06 12:40:40.938432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.432 [2024-11-06 12:40:41.073485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.691 [2024-11-06 12:40:41.282152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.691 [2024-11-06 12:40:41.282225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.309 [2024-11-06 12:40:41.759109] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.309 [2024-11-06 12:40:41.759258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.309 [2024-11-06 12:40:41.759276] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.309 [2024-11-06 12:40:41.759293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.309 [2024-11-06 12:40:41.759304] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.309 [2024-11-06 12:40:41.759318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.309 
12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.309 "name": "Existed_Raid", 00:09:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.309 "strip_size_kb": 0, 00:09:53.309 "state": "configuring", 00:09:53.309 "raid_level": "raid1", 00:09:53.309 "superblock": false, 00:09:53.309 "num_base_bdevs": 3, 00:09:53.309 "num_base_bdevs_discovered": 0, 00:09:53.309 "num_base_bdevs_operational": 3, 00:09:53.309 "base_bdevs_list": [ 00:09:53.309 { 00:09:53.309 "name": "BaseBdev1", 00:09:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.309 "is_configured": false, 00:09:53.309 "data_offset": 0, 00:09:53.309 "data_size": 0 00:09:53.309 }, 00:09:53.309 { 00:09:53.309 "name": "BaseBdev2", 00:09:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.309 "is_configured": false, 00:09:53.309 "data_offset": 0, 00:09:53.309 "data_size": 0 00:09:53.309 }, 00:09:53.309 { 00:09:53.309 "name": "BaseBdev3", 00:09:53.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.309 "is_configured": false, 00:09:53.309 "data_offset": 0, 00:09:53.309 "data_size": 0 00:09:53.309 } 00:09:53.309 ] 00:09:53.309 }' 00:09:53.309 12:40:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.309 12:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 [2024-11-06 12:40:42.291228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.877 [2024-11-06 12:40:42.291296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 [2024-11-06 12:40:42.299159] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.877 [2024-11-06 12:40:42.299228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.877 [2024-11-06 12:40:42.299244] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.877 [2024-11-06 12:40:42.299260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.877 [2024-11-06 12:40:42.299270] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.877 [2024-11-06 12:40:42.299284] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 [2024-11-06 12:40:42.343773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.877 BaseBdev1 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 [ 00:09:53.877 { 00:09:53.877 "name": "BaseBdev1", 00:09:53.877 "aliases": [ 00:09:53.877 "90f75fe9-33f5-4b39-9f08-8534eb41dc16" 00:09:53.877 ], 00:09:53.877 "product_name": "Malloc disk", 00:09:53.877 "block_size": 512, 00:09:53.877 "num_blocks": 65536, 00:09:53.877 "uuid": "90f75fe9-33f5-4b39-9f08-8534eb41dc16", 00:09:53.877 "assigned_rate_limits": { 00:09:53.877 "rw_ios_per_sec": 0, 00:09:53.877 "rw_mbytes_per_sec": 0, 00:09:53.877 "r_mbytes_per_sec": 0, 00:09:53.877 "w_mbytes_per_sec": 0 00:09:53.877 }, 00:09:53.877 "claimed": true, 00:09:53.877 "claim_type": "exclusive_write", 00:09:53.877 "zoned": false, 00:09:53.877 "supported_io_types": { 00:09:53.877 "read": true, 00:09:53.877 "write": true, 00:09:53.877 "unmap": true, 00:09:53.877 "flush": true, 00:09:53.877 "reset": true, 00:09:53.877 "nvme_admin": false, 00:09:53.877 "nvme_io": false, 00:09:53.877 "nvme_io_md": false, 00:09:53.877 "write_zeroes": true, 00:09:53.877 "zcopy": true, 00:09:53.877 "get_zone_info": false, 00:09:53.877 "zone_management": false, 00:09:53.877 "zone_append": false, 00:09:53.877 "compare": false, 00:09:53.877 "compare_and_write": false, 00:09:53.877 "abort": true, 00:09:53.877 "seek_hole": false, 00:09:53.877 "seek_data": false, 00:09:53.877 "copy": true, 00:09:53.877 "nvme_iov_md": false 00:09:53.877 }, 00:09:53.877 "memory_domains": [ 00:09:53.877 { 00:09:53.877 "dma_device_id": "system", 00:09:53.877 "dma_device_type": 1 00:09:53.877 }, 00:09:53.877 { 00:09:53.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.877 "dma_device_type": 2 00:09:53.877 } 00:09:53.877 ], 00:09:53.877 "driver_specific": {} 00:09:53.877 } 00:09:53.877 ] 00:09:53.877 12:40:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.877 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:53.877 "name": "Existed_Raid", 00:09:53.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.877 "strip_size_kb": 0, 00:09:53.877 "state": "configuring", 00:09:53.877 "raid_level": "raid1", 00:09:53.877 "superblock": false, 00:09:53.877 "num_base_bdevs": 3, 00:09:53.877 "num_base_bdevs_discovered": 1, 00:09:53.877 "num_base_bdevs_operational": 3, 00:09:53.877 "base_bdevs_list": [ 00:09:53.877 { 00:09:53.877 "name": "BaseBdev1", 00:09:53.877 "uuid": "90f75fe9-33f5-4b39-9f08-8534eb41dc16", 00:09:53.877 "is_configured": true, 00:09:53.877 "data_offset": 0, 00:09:53.877 "data_size": 65536 00:09:53.877 }, 00:09:53.877 { 00:09:53.877 "name": "BaseBdev2", 00:09:53.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.878 "is_configured": false, 00:09:53.878 "data_offset": 0, 00:09:53.878 "data_size": 0 00:09:53.878 }, 00:09:53.878 { 00:09:53.878 "name": "BaseBdev3", 00:09:53.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.878 "is_configured": false, 00:09:53.878 "data_offset": 0, 00:09:53.878 "data_size": 0 00:09:53.878 } 00:09:53.878 ] 00:09:53.878 }' 00:09:53.878 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.878 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.450 [2024-11-06 12:40:42.879996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.450 [2024-11-06 12:40:42.880078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.450 [2024-11-06 12:40:42.888010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.450 [2024-11-06 12:40:42.890419] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.450 [2024-11-06 12:40:42.890471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.450 [2024-11-06 12:40:42.890487] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.450 [2024-11-06 12:40:42.890504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.450 "name": "Existed_Raid", 00:09:54.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.450 "strip_size_kb": 0, 00:09:54.450 "state": "configuring", 00:09:54.450 "raid_level": "raid1", 00:09:54.450 "superblock": false, 00:09:54.450 "num_base_bdevs": 3, 00:09:54.450 "num_base_bdevs_discovered": 1, 00:09:54.450 "num_base_bdevs_operational": 3, 00:09:54.450 "base_bdevs_list": [ 00:09:54.450 { 00:09:54.450 "name": "BaseBdev1", 00:09:54.450 "uuid": "90f75fe9-33f5-4b39-9f08-8534eb41dc16", 00:09:54.450 "is_configured": true, 00:09:54.450 "data_offset": 0, 00:09:54.450 "data_size": 65536 00:09:54.450 }, 00:09:54.450 { 00:09:54.450 "name": "BaseBdev2", 00:09:54.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.450 
"is_configured": false, 00:09:54.450 "data_offset": 0, 00:09:54.450 "data_size": 0 00:09:54.450 }, 00:09:54.450 { 00:09:54.450 "name": "BaseBdev3", 00:09:54.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.450 "is_configured": false, 00:09:54.450 "data_offset": 0, 00:09:54.450 "data_size": 0 00:09:54.450 } 00:09:54.450 ] 00:09:54.450 }' 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.450 12:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.017 [2024-11-06 12:40:43.422851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.017 BaseBdev2 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.017 12:40:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.017 [ 00:09:55.017 { 00:09:55.017 "name": "BaseBdev2", 00:09:55.017 "aliases": [ 00:09:55.017 "c1ef4e80-e27a-49fa-8283-111def308bb1" 00:09:55.017 ], 00:09:55.017 "product_name": "Malloc disk", 00:09:55.017 "block_size": 512, 00:09:55.017 "num_blocks": 65536, 00:09:55.017 "uuid": "c1ef4e80-e27a-49fa-8283-111def308bb1", 00:09:55.017 "assigned_rate_limits": { 00:09:55.017 "rw_ios_per_sec": 0, 00:09:55.017 "rw_mbytes_per_sec": 0, 00:09:55.017 "r_mbytes_per_sec": 0, 00:09:55.017 "w_mbytes_per_sec": 0 00:09:55.017 }, 00:09:55.017 "claimed": true, 00:09:55.017 "claim_type": "exclusive_write", 00:09:55.017 "zoned": false, 00:09:55.017 "supported_io_types": { 00:09:55.017 "read": true, 00:09:55.017 "write": true, 00:09:55.017 "unmap": true, 00:09:55.017 "flush": true, 00:09:55.017 "reset": true, 00:09:55.017 "nvme_admin": false, 00:09:55.017 "nvme_io": false, 00:09:55.017 "nvme_io_md": false, 00:09:55.017 "write_zeroes": true, 00:09:55.017 "zcopy": true, 00:09:55.017 "get_zone_info": false, 00:09:55.017 "zone_management": false, 00:09:55.017 "zone_append": false, 00:09:55.017 "compare": false, 00:09:55.017 "compare_and_write": false, 00:09:55.017 "abort": true, 00:09:55.017 "seek_hole": false, 00:09:55.017 "seek_data": false, 00:09:55.017 "copy": true, 00:09:55.017 "nvme_iov_md": false 00:09:55.017 }, 00:09:55.017 
"memory_domains": [ 00:09:55.017 { 00:09:55.017 "dma_device_id": "system", 00:09:55.017 "dma_device_type": 1 00:09:55.017 }, 00:09:55.017 { 00:09:55.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.017 "dma_device_type": 2 00:09:55.017 } 00:09:55.017 ], 00:09:55.017 "driver_specific": {} 00:09:55.017 } 00:09:55.017 ] 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.017 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.018 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.018 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.018 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.018 "name": "Existed_Raid", 00:09:55.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.018 "strip_size_kb": 0, 00:09:55.018 "state": "configuring", 00:09:55.018 "raid_level": "raid1", 00:09:55.018 "superblock": false, 00:09:55.018 "num_base_bdevs": 3, 00:09:55.018 "num_base_bdevs_discovered": 2, 00:09:55.018 "num_base_bdevs_operational": 3, 00:09:55.018 "base_bdevs_list": [ 00:09:55.018 { 00:09:55.018 "name": "BaseBdev1", 00:09:55.018 "uuid": "90f75fe9-33f5-4b39-9f08-8534eb41dc16", 00:09:55.018 "is_configured": true, 00:09:55.018 "data_offset": 0, 00:09:55.018 "data_size": 65536 00:09:55.018 }, 00:09:55.018 { 00:09:55.018 "name": "BaseBdev2", 00:09:55.018 "uuid": "c1ef4e80-e27a-49fa-8283-111def308bb1", 00:09:55.018 "is_configured": true, 00:09:55.018 "data_offset": 0, 00:09:55.018 "data_size": 65536 00:09:55.018 }, 00:09:55.018 { 00:09:55.018 "name": "BaseBdev3", 00:09:55.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.018 "is_configured": false, 00:09:55.018 "data_offset": 0, 00:09:55.018 "data_size": 0 00:09:55.018 } 00:09:55.018 ] 00:09:55.018 }' 00:09:55.018 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.018 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.585 [2024-11-06 12:40:43.988286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.585 [2024-11-06 12:40:43.988353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.585 [2024-11-06 12:40:43.988372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:55.585 [2024-11-06 12:40:43.988732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:55.585 [2024-11-06 12:40:43.988939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.585 [2024-11-06 12:40:43.988955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:55.585 [2024-11-06 12:40:43.989316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.585 BaseBdev3 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.585 12:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.585 [ 00:09:55.585 { 00:09:55.585 "name": "BaseBdev3", 00:09:55.585 "aliases": [ 00:09:55.585 "44af7c4c-2c72-442e-9706-6cf17ab14e99" 00:09:55.585 ], 00:09:55.585 "product_name": "Malloc disk", 00:09:55.585 "block_size": 512, 00:09:55.585 "num_blocks": 65536, 00:09:55.585 "uuid": "44af7c4c-2c72-442e-9706-6cf17ab14e99", 00:09:55.585 "assigned_rate_limits": { 00:09:55.585 "rw_ios_per_sec": 0, 00:09:55.585 "rw_mbytes_per_sec": 0, 00:09:55.585 "r_mbytes_per_sec": 0, 00:09:55.585 "w_mbytes_per_sec": 0 00:09:55.585 }, 00:09:55.585 "claimed": true, 00:09:55.585 "claim_type": "exclusive_write", 00:09:55.585 "zoned": false, 00:09:55.585 "supported_io_types": { 00:09:55.585 "read": true, 00:09:55.585 "write": true, 00:09:55.585 "unmap": true, 00:09:55.585 "flush": true, 00:09:55.585 "reset": true, 00:09:55.585 "nvme_admin": false, 00:09:55.585 "nvme_io": false, 00:09:55.585 "nvme_io_md": false, 00:09:55.585 "write_zeroes": true, 00:09:55.585 "zcopy": true, 00:09:55.585 "get_zone_info": false, 00:09:55.585 "zone_management": false, 00:09:55.585 "zone_append": false, 00:09:55.585 "compare": false, 00:09:55.585 "compare_and_write": false, 00:09:55.585 "abort": true, 00:09:55.585 "seek_hole": false, 00:09:55.585 "seek_data": false, 00:09:55.585 
"copy": true, 00:09:55.585 "nvme_iov_md": false 00:09:55.585 }, 00:09:55.585 "memory_domains": [ 00:09:55.585 { 00:09:55.585 "dma_device_id": "system", 00:09:55.585 "dma_device_type": 1 00:09:55.585 }, 00:09:55.585 { 00:09:55.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.585 "dma_device_type": 2 00:09:55.585 } 00:09:55.585 ], 00:09:55.586 "driver_specific": {} 00:09:55.586 } 00:09:55.586 ] 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.586 12:40:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.586 "name": "Existed_Raid", 00:09:55.586 "uuid": "a62b9329-5a6e-4a34-b6fe-2cfa6e29e26c", 00:09:55.586 "strip_size_kb": 0, 00:09:55.586 "state": "online", 00:09:55.586 "raid_level": "raid1", 00:09:55.586 "superblock": false, 00:09:55.586 "num_base_bdevs": 3, 00:09:55.586 "num_base_bdevs_discovered": 3, 00:09:55.586 "num_base_bdevs_operational": 3, 00:09:55.586 "base_bdevs_list": [ 00:09:55.586 { 00:09:55.586 "name": "BaseBdev1", 00:09:55.586 "uuid": "90f75fe9-33f5-4b39-9f08-8534eb41dc16", 00:09:55.586 "is_configured": true, 00:09:55.586 "data_offset": 0, 00:09:55.586 "data_size": 65536 00:09:55.586 }, 00:09:55.586 { 00:09:55.586 "name": "BaseBdev2", 00:09:55.586 "uuid": "c1ef4e80-e27a-49fa-8283-111def308bb1", 00:09:55.586 "is_configured": true, 00:09:55.586 "data_offset": 0, 00:09:55.586 "data_size": 65536 00:09:55.586 }, 00:09:55.586 { 00:09:55.586 "name": "BaseBdev3", 00:09:55.586 "uuid": "44af7c4c-2c72-442e-9706-6cf17ab14e99", 00:09:55.586 "is_configured": true, 00:09:55.586 "data_offset": 0, 00:09:55.586 "data_size": 65536 00:09:55.586 } 00:09:55.586 ] 00:09:55.586 }' 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.586 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.155 12:40:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.155 [2024-11-06 12:40:44.552912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.155 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.155 "name": "Existed_Raid", 00:09:56.155 "aliases": [ 00:09:56.156 "a62b9329-5a6e-4a34-b6fe-2cfa6e29e26c" 00:09:56.156 ], 00:09:56.156 "product_name": "Raid Volume", 00:09:56.156 "block_size": 512, 00:09:56.156 "num_blocks": 65536, 00:09:56.156 "uuid": "a62b9329-5a6e-4a34-b6fe-2cfa6e29e26c", 00:09:56.156 "assigned_rate_limits": { 00:09:56.156 "rw_ios_per_sec": 0, 00:09:56.156 "rw_mbytes_per_sec": 0, 00:09:56.156 "r_mbytes_per_sec": 0, 00:09:56.156 "w_mbytes_per_sec": 0 00:09:56.156 }, 00:09:56.156 "claimed": false, 00:09:56.156 "zoned": false, 
00:09:56.156 "supported_io_types": { 00:09:56.156 "read": true, 00:09:56.156 "write": true, 00:09:56.156 "unmap": false, 00:09:56.156 "flush": false, 00:09:56.156 "reset": true, 00:09:56.156 "nvme_admin": false, 00:09:56.156 "nvme_io": false, 00:09:56.156 "nvme_io_md": false, 00:09:56.156 "write_zeroes": true, 00:09:56.156 "zcopy": false, 00:09:56.156 "get_zone_info": false, 00:09:56.156 "zone_management": false, 00:09:56.156 "zone_append": false, 00:09:56.156 "compare": false, 00:09:56.156 "compare_and_write": false, 00:09:56.156 "abort": false, 00:09:56.156 "seek_hole": false, 00:09:56.156 "seek_data": false, 00:09:56.156 "copy": false, 00:09:56.156 "nvme_iov_md": false 00:09:56.156 }, 00:09:56.156 "memory_domains": [ 00:09:56.156 { 00:09:56.156 "dma_device_id": "system", 00:09:56.156 "dma_device_type": 1 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.156 "dma_device_type": 2 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "dma_device_id": "system", 00:09:56.156 "dma_device_type": 1 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.156 "dma_device_type": 2 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "dma_device_id": "system", 00:09:56.156 "dma_device_type": 1 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.156 "dma_device_type": 2 00:09:56.156 } 00:09:56.156 ], 00:09:56.156 "driver_specific": { 00:09:56.156 "raid": { 00:09:56.156 "uuid": "a62b9329-5a6e-4a34-b6fe-2cfa6e29e26c", 00:09:56.156 "strip_size_kb": 0, 00:09:56.156 "state": "online", 00:09:56.156 "raid_level": "raid1", 00:09:56.156 "superblock": false, 00:09:56.156 "num_base_bdevs": 3, 00:09:56.156 "num_base_bdevs_discovered": 3, 00:09:56.156 "num_base_bdevs_operational": 3, 00:09:56.156 "base_bdevs_list": [ 00:09:56.156 { 00:09:56.156 "name": "BaseBdev1", 00:09:56.156 "uuid": "90f75fe9-33f5-4b39-9f08-8534eb41dc16", 00:09:56.156 "is_configured": true, 00:09:56.156 
"data_offset": 0, 00:09:56.156 "data_size": 65536 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "name": "BaseBdev2", 00:09:56.156 "uuid": "c1ef4e80-e27a-49fa-8283-111def308bb1", 00:09:56.156 "is_configured": true, 00:09:56.156 "data_offset": 0, 00:09:56.156 "data_size": 65536 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "name": "BaseBdev3", 00:09:56.156 "uuid": "44af7c4c-2c72-442e-9706-6cf17ab14e99", 00:09:56.156 "is_configured": true, 00:09:56.156 "data_offset": 0, 00:09:56.156 "data_size": 65536 00:09:56.156 } 00:09:56.156 ] 00:09:56.156 } 00:09:56.156 } 00:09:56.156 }' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.156 BaseBdev2 00:09:56.156 BaseBdev3' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.156 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.416 [2024-11-06 12:40:44.856669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.416 "name": "Existed_Raid", 00:09:56.416 "uuid": "a62b9329-5a6e-4a34-b6fe-2cfa6e29e26c", 00:09:56.416 "strip_size_kb": 0, 00:09:56.416 "state": "online", 00:09:56.416 "raid_level": "raid1", 00:09:56.416 "superblock": false, 00:09:56.416 "num_base_bdevs": 3, 00:09:56.416 "num_base_bdevs_discovered": 2, 00:09:56.416 "num_base_bdevs_operational": 2, 00:09:56.416 "base_bdevs_list": [ 00:09:56.416 { 00:09:56.416 "name": null, 00:09:56.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.416 "is_configured": false, 00:09:56.416 "data_offset": 0, 00:09:56.416 "data_size": 65536 00:09:56.416 }, 00:09:56.416 { 00:09:56.416 "name": "BaseBdev2", 00:09:56.416 "uuid": "c1ef4e80-e27a-49fa-8283-111def308bb1", 00:09:56.416 "is_configured": true, 00:09:56.416 "data_offset": 0, 00:09:56.416 "data_size": 65536 00:09:56.416 }, 00:09:56.416 { 00:09:56.416 "name": "BaseBdev3", 00:09:56.416 "uuid": "44af7c4c-2c72-442e-9706-6cf17ab14e99", 00:09:56.416 "is_configured": true, 00:09:56.416 "data_offset": 0, 00:09:56.416 "data_size": 65536 00:09:56.416 } 00:09:56.416 ] 
00:09:56.416 }' 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.416 12:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.989 [2024-11-06 12:40:45.499759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.989 12:40:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.989 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 [2024-11-06 12:40:45.652774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.248 [2024-11-06 12:40:45.652921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.248 [2024-11-06 12:40:45.749289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.248 [2024-11-06 12:40:45.749379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.248 [2024-11-06 12:40:45.749402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.248 12:40:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 BaseBdev2 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:57.248 
12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.248 [ 00:09:57.248 { 00:09:57.248 "name": "BaseBdev2", 00:09:57.248 "aliases": [ 00:09:57.248 "4894516e-e926-4e95-aa93-65c5049b0e2b" 00:09:57.248 ], 00:09:57.248 "product_name": "Malloc disk", 00:09:57.248 "block_size": 512, 00:09:57.248 "num_blocks": 65536, 00:09:57.248 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:09:57.248 "assigned_rate_limits": { 00:09:57.248 "rw_ios_per_sec": 0, 00:09:57.248 "rw_mbytes_per_sec": 0, 00:09:57.248 "r_mbytes_per_sec": 0, 00:09:57.248 "w_mbytes_per_sec": 0 00:09:57.248 }, 00:09:57.248 "claimed": false, 00:09:57.248 "zoned": false, 00:09:57.248 "supported_io_types": { 00:09:57.248 "read": true, 00:09:57.248 "write": true, 00:09:57.248 "unmap": true, 00:09:57.248 "flush": true, 00:09:57.248 "reset": true, 00:09:57.248 "nvme_admin": false, 00:09:57.248 "nvme_io": false, 00:09:57.248 "nvme_io_md": false, 00:09:57.248 "write_zeroes": true, 
00:09:57.248 "zcopy": true, 00:09:57.248 "get_zone_info": false, 00:09:57.248 "zone_management": false, 00:09:57.248 "zone_append": false, 00:09:57.248 "compare": false, 00:09:57.248 "compare_and_write": false, 00:09:57.248 "abort": true, 00:09:57.248 "seek_hole": false, 00:09:57.248 "seek_data": false, 00:09:57.248 "copy": true, 00:09:57.248 "nvme_iov_md": false 00:09:57.248 }, 00:09:57.248 "memory_domains": [ 00:09:57.248 { 00:09:57.248 "dma_device_id": "system", 00:09:57.248 "dma_device_type": 1 00:09:57.248 }, 00:09:57.248 { 00:09:57.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.248 "dma_device_type": 2 00:09:57.248 } 00:09:57.248 ], 00:09:57.248 "driver_specific": {} 00:09:57.248 } 00:09:57.248 ] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.248 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.507 BaseBdev3 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:57.507 12:40:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.507 [ 00:09:57.507 { 00:09:57.507 "name": "BaseBdev3", 00:09:57.507 "aliases": [ 00:09:57.507 "9cd1ce2a-668f-4042-a4e6-a95516dafaea" 00:09:57.507 ], 00:09:57.507 "product_name": "Malloc disk", 00:09:57.507 "block_size": 512, 00:09:57.507 "num_blocks": 65536, 00:09:57.507 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:09:57.507 "assigned_rate_limits": { 00:09:57.507 "rw_ios_per_sec": 0, 00:09:57.507 "rw_mbytes_per_sec": 0, 00:09:57.507 "r_mbytes_per_sec": 0, 00:09:57.507 "w_mbytes_per_sec": 0 00:09:57.507 }, 00:09:57.507 "claimed": false, 00:09:57.507 "zoned": false, 00:09:57.507 "supported_io_types": { 00:09:57.507 "read": true, 00:09:57.507 "write": true, 00:09:57.507 "unmap": true, 00:09:57.507 "flush": true, 00:09:57.507 "reset": true, 00:09:57.507 "nvme_admin": false, 00:09:57.507 "nvme_io": false, 00:09:57.507 "nvme_io_md": false, 00:09:57.507 "write_zeroes": true, 
00:09:57.507 "zcopy": true, 00:09:57.507 "get_zone_info": false, 00:09:57.507 "zone_management": false, 00:09:57.507 "zone_append": false, 00:09:57.507 "compare": false, 00:09:57.507 "compare_and_write": false, 00:09:57.507 "abort": true, 00:09:57.507 "seek_hole": false, 00:09:57.507 "seek_data": false, 00:09:57.507 "copy": true, 00:09:57.507 "nvme_iov_md": false 00:09:57.507 }, 00:09:57.507 "memory_domains": [ 00:09:57.507 { 00:09:57.507 "dma_device_id": "system", 00:09:57.507 "dma_device_type": 1 00:09:57.507 }, 00:09:57.507 { 00:09:57.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.507 "dma_device_type": 2 00:09:57.507 } 00:09:57.507 ], 00:09:57.507 "driver_specific": {} 00:09:57.507 } 00:09:57.507 ] 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.507 [2024-11-06 12:40:45.949297] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.507 [2024-11-06 12:40:45.949365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.507 [2024-11-06 12:40:45.949401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.507 [2024-11-06 12:40:45.952122] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.507 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.508 12:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.508 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:57.508 "name": "Existed_Raid", 00:09:57.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.508 "strip_size_kb": 0, 00:09:57.508 "state": "configuring", 00:09:57.508 "raid_level": "raid1", 00:09:57.508 "superblock": false, 00:09:57.508 "num_base_bdevs": 3, 00:09:57.508 "num_base_bdevs_discovered": 2, 00:09:57.508 "num_base_bdevs_operational": 3, 00:09:57.508 "base_bdevs_list": [ 00:09:57.508 { 00:09:57.508 "name": "BaseBdev1", 00:09:57.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.508 "is_configured": false, 00:09:57.508 "data_offset": 0, 00:09:57.508 "data_size": 0 00:09:57.508 }, 00:09:57.508 { 00:09:57.508 "name": "BaseBdev2", 00:09:57.508 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:09:57.508 "is_configured": true, 00:09:57.508 "data_offset": 0, 00:09:57.508 "data_size": 65536 00:09:57.508 }, 00:09:57.508 { 00:09:57.508 "name": "BaseBdev3", 00:09:57.508 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:09:57.508 "is_configured": true, 00:09:57.508 "data_offset": 0, 00:09:57.508 "data_size": 65536 00:09:57.508 } 00:09:57.508 ] 00:09:57.508 }' 00:09:57.508 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.508 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.075 [2024-11-06 12:40:46.481455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.075 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.075 "name": "Existed_Raid", 00:09:58.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.075 "strip_size_kb": 0, 00:09:58.075 "state": "configuring", 00:09:58.075 "raid_level": "raid1", 00:09:58.075 "superblock": false, 00:09:58.075 "num_base_bdevs": 3, 
00:09:58.075 "num_base_bdevs_discovered": 1, 00:09:58.075 "num_base_bdevs_operational": 3, 00:09:58.075 "base_bdevs_list": [ 00:09:58.075 { 00:09:58.075 "name": "BaseBdev1", 00:09:58.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.075 "is_configured": false, 00:09:58.075 "data_offset": 0, 00:09:58.075 "data_size": 0 00:09:58.075 }, 00:09:58.075 { 00:09:58.075 "name": null, 00:09:58.075 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:09:58.075 "is_configured": false, 00:09:58.076 "data_offset": 0, 00:09:58.076 "data_size": 65536 00:09:58.076 }, 00:09:58.076 { 00:09:58.076 "name": "BaseBdev3", 00:09:58.076 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:09:58.076 "is_configured": true, 00:09:58.076 "data_offset": 0, 00:09:58.076 "data_size": 65536 00:09:58.076 } 00:09:58.076 ] 00:09:58.076 }' 00:09:58.076 12:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.076 12:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.643 12:40:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.643 [2024-11-06 12:40:47.123828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.643 BaseBdev1 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.643 [ 00:09:58.643 { 00:09:58.643 "name": "BaseBdev1", 00:09:58.643 "aliases": [ 00:09:58.643 "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a" 00:09:58.643 ], 00:09:58.643 "product_name": "Malloc disk", 
00:09:58.643 "block_size": 512, 00:09:58.643 "num_blocks": 65536, 00:09:58.643 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:09:58.643 "assigned_rate_limits": { 00:09:58.643 "rw_ios_per_sec": 0, 00:09:58.643 "rw_mbytes_per_sec": 0, 00:09:58.643 "r_mbytes_per_sec": 0, 00:09:58.643 "w_mbytes_per_sec": 0 00:09:58.643 }, 00:09:58.643 "claimed": true, 00:09:58.643 "claim_type": "exclusive_write", 00:09:58.643 "zoned": false, 00:09:58.643 "supported_io_types": { 00:09:58.643 "read": true, 00:09:58.643 "write": true, 00:09:58.643 "unmap": true, 00:09:58.643 "flush": true, 00:09:58.643 "reset": true, 00:09:58.643 "nvme_admin": false, 00:09:58.643 "nvme_io": false, 00:09:58.643 "nvme_io_md": false, 00:09:58.643 "write_zeroes": true, 00:09:58.643 "zcopy": true, 00:09:58.643 "get_zone_info": false, 00:09:58.643 "zone_management": false, 00:09:58.643 "zone_append": false, 00:09:58.643 "compare": false, 00:09:58.643 "compare_and_write": false, 00:09:58.643 "abort": true, 00:09:58.643 "seek_hole": false, 00:09:58.643 "seek_data": false, 00:09:58.643 "copy": true, 00:09:58.643 "nvme_iov_md": false 00:09:58.643 }, 00:09:58.643 "memory_domains": [ 00:09:58.643 { 00:09:58.643 "dma_device_id": "system", 00:09:58.643 "dma_device_type": 1 00:09:58.643 }, 00:09:58.643 { 00:09:58.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.643 "dma_device_type": 2 00:09:58.643 } 00:09:58.643 ], 00:09:58.643 "driver_specific": {} 00:09:58.643 } 00:09:58.643 ] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.643 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.643 "name": "Existed_Raid", 00:09:58.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.643 "strip_size_kb": 0, 00:09:58.643 "state": "configuring", 00:09:58.643 "raid_level": "raid1", 00:09:58.643 "superblock": false, 00:09:58.643 "num_base_bdevs": 3, 00:09:58.643 "num_base_bdevs_discovered": 2, 00:09:58.643 "num_base_bdevs_operational": 3, 00:09:58.643 "base_bdevs_list": [ 00:09:58.643 { 00:09:58.643 "name": "BaseBdev1", 00:09:58.643 "uuid": 
"50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:09:58.643 "is_configured": true, 00:09:58.644 "data_offset": 0, 00:09:58.644 "data_size": 65536 00:09:58.644 }, 00:09:58.644 { 00:09:58.644 "name": null, 00:09:58.644 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:09:58.644 "is_configured": false, 00:09:58.644 "data_offset": 0, 00:09:58.644 "data_size": 65536 00:09:58.644 }, 00:09:58.644 { 00:09:58.644 "name": "BaseBdev3", 00:09:58.644 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:09:58.644 "is_configured": true, 00:09:58.644 "data_offset": 0, 00:09:58.644 "data_size": 65536 00:09:58.644 } 00:09:58.644 ] 00:09:58.644 }' 00:09:58.644 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.644 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.212 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.213 [2024-11-06 12:40:47.704064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.213 12:40:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.213 "name": "Existed_Raid", 00:09:59.213 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:59.213 "strip_size_kb": 0, 00:09:59.213 "state": "configuring", 00:09:59.213 "raid_level": "raid1", 00:09:59.213 "superblock": false, 00:09:59.213 "num_base_bdevs": 3, 00:09:59.213 "num_base_bdevs_discovered": 1, 00:09:59.213 "num_base_bdevs_operational": 3, 00:09:59.213 "base_bdevs_list": [ 00:09:59.213 { 00:09:59.213 "name": "BaseBdev1", 00:09:59.213 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:09:59.213 "is_configured": true, 00:09:59.213 "data_offset": 0, 00:09:59.213 "data_size": 65536 00:09:59.213 }, 00:09:59.213 { 00:09:59.213 "name": null, 00:09:59.213 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:09:59.213 "is_configured": false, 00:09:59.213 "data_offset": 0, 00:09:59.213 "data_size": 65536 00:09:59.213 }, 00:09:59.213 { 00:09:59.213 "name": null, 00:09:59.213 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:09:59.213 "is_configured": false, 00:09:59.213 "data_offset": 0, 00:09:59.213 "data_size": 65536 00:09:59.213 } 00:09:59.213 ] 00:09:59.213 }' 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.213 12:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.780 [2024-11-06 12:40:48.300375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.780 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.781 "name": "Existed_Raid", 00:09:59.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.781 "strip_size_kb": 0, 00:09:59.781 "state": "configuring", 00:09:59.781 "raid_level": "raid1", 00:09:59.781 "superblock": false, 00:09:59.781 "num_base_bdevs": 3, 00:09:59.781 "num_base_bdevs_discovered": 2, 00:09:59.781 "num_base_bdevs_operational": 3, 00:09:59.781 "base_bdevs_list": [ 00:09:59.781 { 00:09:59.781 "name": "BaseBdev1", 00:09:59.781 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:09:59.781 "is_configured": true, 00:09:59.781 "data_offset": 0, 00:09:59.781 "data_size": 65536 00:09:59.781 }, 00:09:59.781 { 00:09:59.781 "name": null, 00:09:59.781 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:09:59.781 "is_configured": false, 00:09:59.781 "data_offset": 0, 00:09:59.781 "data_size": 65536 00:09:59.781 }, 00:09:59.781 { 00:09:59.781 "name": "BaseBdev3", 00:09:59.781 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:09:59.781 "is_configured": true, 00:09:59.781 "data_offset": 0, 00:09:59.781 "data_size": 65536 00:09:59.781 } 00:09:59.781 ] 00:09:59.781 }' 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.781 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 [2024-11-06 12:40:48.860525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 12:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.607 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.607 "name": "Existed_Raid", 00:10:00.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.607 "strip_size_kb": 0, 00:10:00.607 "state": "configuring", 00:10:00.607 "raid_level": "raid1", 00:10:00.607 "superblock": false, 00:10:00.607 "num_base_bdevs": 3, 00:10:00.607 "num_base_bdevs_discovered": 1, 00:10:00.607 "num_base_bdevs_operational": 3, 00:10:00.607 "base_bdevs_list": [ 00:10:00.607 { 00:10:00.607 "name": null, 00:10:00.607 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:10:00.607 "is_configured": false, 00:10:00.607 "data_offset": 0, 00:10:00.607 "data_size": 65536 00:10:00.607 }, 00:10:00.607 { 00:10:00.607 "name": null, 00:10:00.607 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:10:00.607 "is_configured": false, 00:10:00.607 "data_offset": 0, 00:10:00.607 "data_size": 65536 00:10:00.607 }, 00:10:00.607 { 00:10:00.607 "name": "BaseBdev3", 00:10:00.607 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:10:00.607 "is_configured": true, 00:10:00.607 "data_offset": 0, 00:10:00.607 "data_size": 65536 00:10:00.607 } 00:10:00.607 ] 00:10:00.607 }' 00:10:00.607 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.607 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.174 [2024-11-06 12:40:49.606685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.174 "name": "Existed_Raid", 00:10:01.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.174 "strip_size_kb": 0, 00:10:01.174 "state": "configuring", 00:10:01.174 "raid_level": "raid1", 00:10:01.174 "superblock": false, 00:10:01.174 "num_base_bdevs": 3, 00:10:01.174 "num_base_bdevs_discovered": 2, 00:10:01.174 "num_base_bdevs_operational": 3, 00:10:01.174 "base_bdevs_list": [ 00:10:01.174 { 00:10:01.174 "name": null, 00:10:01.174 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:10:01.174 "is_configured": false, 00:10:01.174 "data_offset": 0, 00:10:01.174 "data_size": 65536 00:10:01.174 }, 00:10:01.174 { 00:10:01.174 "name": "BaseBdev2", 00:10:01.174 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:10:01.174 "is_configured": true, 00:10:01.174 "data_offset": 0, 00:10:01.174 "data_size": 65536 00:10:01.174 }, 00:10:01.174 { 00:10:01.174 "name": "BaseBdev3", 
00:10:01.174 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:10:01.174 "is_configured": true, 00:10:01.174 "data_offset": 0, 00:10:01.174 "data_size": 65536 00:10:01.174 } 00:10:01.174 ] 00:10:01.174 }' 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.174 12:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.741 [2024-11-06 12:40:50.266043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:01.741 [2024-11-06 12:40:50.266163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.741 [2024-11-06 12:40:50.266178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:01.741 [2024-11-06 12:40:50.266579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:01.741 [2024-11-06 12:40:50.266821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.741 [2024-11-06 12:40:50.266855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:01.741 [2024-11-06 12:40:50.267235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.741 NewBaseBdev 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.741 
12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.741 [ 00:10:01.741 { 00:10:01.741 "name": "NewBaseBdev", 00:10:01.741 "aliases": [ 00:10:01.741 "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a" 00:10:01.741 ], 00:10:01.741 "product_name": "Malloc disk", 00:10:01.741 "block_size": 512, 00:10:01.741 "num_blocks": 65536, 00:10:01.741 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:10:01.741 "assigned_rate_limits": { 00:10:01.741 "rw_ios_per_sec": 0, 00:10:01.741 "rw_mbytes_per_sec": 0, 00:10:01.741 "r_mbytes_per_sec": 0, 00:10:01.741 "w_mbytes_per_sec": 0 00:10:01.741 }, 00:10:01.741 "claimed": true, 00:10:01.741 "claim_type": "exclusive_write", 00:10:01.741 "zoned": false, 00:10:01.741 "supported_io_types": { 00:10:01.741 "read": true, 00:10:01.741 "write": true, 00:10:01.741 "unmap": true, 00:10:01.741 "flush": true, 00:10:01.741 "reset": true, 00:10:01.741 "nvme_admin": false, 00:10:01.741 "nvme_io": false, 00:10:01.741 "nvme_io_md": false, 00:10:01.741 "write_zeroes": true, 00:10:01.741 "zcopy": true, 00:10:01.741 "get_zone_info": false, 00:10:01.741 "zone_management": false, 00:10:01.741 "zone_append": false, 00:10:01.741 "compare": false, 00:10:01.741 "compare_and_write": false, 00:10:01.741 "abort": true, 00:10:01.741 "seek_hole": false, 00:10:01.741 "seek_data": false, 00:10:01.741 "copy": true, 00:10:01.741 "nvme_iov_md": false 00:10:01.741 }, 00:10:01.741 "memory_domains": [ 00:10:01.741 { 00:10:01.741 "dma_device_id": "system", 00:10:01.741 "dma_device_type": 1 
00:10:01.741 }, 00:10:01.741 { 00:10:01.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.741 "dma_device_type": 2 00:10:01.741 } 00:10:01.741 ], 00:10:01.741 "driver_specific": {} 00:10:01.741 } 00:10:01.741 ] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.741 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.741 "name": "Existed_Raid", 00:10:01.741 "uuid": "4a870968-e548-47b9-9ea5-18c213f8d6a3", 00:10:01.741 "strip_size_kb": 0, 00:10:01.741 "state": "online", 00:10:01.741 "raid_level": "raid1", 00:10:01.741 "superblock": false, 00:10:01.741 "num_base_bdevs": 3, 00:10:01.741 "num_base_bdevs_discovered": 3, 00:10:01.741 "num_base_bdevs_operational": 3, 00:10:01.741 "base_bdevs_list": [ 00:10:01.741 { 00:10:01.741 "name": "NewBaseBdev", 00:10:01.741 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:10:01.741 "is_configured": true, 00:10:01.741 "data_offset": 0, 00:10:01.742 "data_size": 65536 00:10:01.742 }, 00:10:01.742 { 00:10:01.742 "name": "BaseBdev2", 00:10:01.742 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:10:01.742 "is_configured": true, 00:10:01.742 "data_offset": 0, 00:10:01.742 "data_size": 65536 00:10:01.742 }, 00:10:01.742 { 00:10:01.742 "name": "BaseBdev3", 00:10:01.742 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:10:01.742 "is_configured": true, 00:10:01.742 "data_offset": 0, 00:10:01.742 "data_size": 65536 00:10:01.742 } 00:10:01.742 ] 00:10:01.742 }' 00:10:01.742 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.742 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.309 [2024-11-06 12:40:50.786738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.309 "name": "Existed_Raid", 00:10:02.309 "aliases": [ 00:10:02.309 "4a870968-e548-47b9-9ea5-18c213f8d6a3" 00:10:02.309 ], 00:10:02.309 "product_name": "Raid Volume", 00:10:02.309 "block_size": 512, 00:10:02.309 "num_blocks": 65536, 00:10:02.309 "uuid": "4a870968-e548-47b9-9ea5-18c213f8d6a3", 00:10:02.309 "assigned_rate_limits": { 00:10:02.309 "rw_ios_per_sec": 0, 00:10:02.309 "rw_mbytes_per_sec": 0, 00:10:02.309 "r_mbytes_per_sec": 0, 00:10:02.309 "w_mbytes_per_sec": 0 00:10:02.309 }, 00:10:02.309 "claimed": false, 00:10:02.309 "zoned": false, 00:10:02.309 "supported_io_types": { 00:10:02.309 "read": true, 00:10:02.309 "write": true, 00:10:02.309 "unmap": false, 00:10:02.309 "flush": false, 00:10:02.309 "reset": true, 00:10:02.309 "nvme_admin": false, 00:10:02.309 "nvme_io": false, 00:10:02.309 "nvme_io_md": false, 00:10:02.309 "write_zeroes": true, 00:10:02.309 "zcopy": false, 00:10:02.309 "get_zone_info": false, 00:10:02.309 "zone_management": false, 00:10:02.309 
"zone_append": false, 00:10:02.309 "compare": false, 00:10:02.309 "compare_and_write": false, 00:10:02.309 "abort": false, 00:10:02.309 "seek_hole": false, 00:10:02.309 "seek_data": false, 00:10:02.309 "copy": false, 00:10:02.309 "nvme_iov_md": false 00:10:02.309 }, 00:10:02.309 "memory_domains": [ 00:10:02.309 { 00:10:02.309 "dma_device_id": "system", 00:10:02.309 "dma_device_type": 1 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.309 "dma_device_type": 2 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "dma_device_id": "system", 00:10:02.309 "dma_device_type": 1 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.309 "dma_device_type": 2 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "dma_device_id": "system", 00:10:02.309 "dma_device_type": 1 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.309 "dma_device_type": 2 00:10:02.309 } 00:10:02.309 ], 00:10:02.309 "driver_specific": { 00:10:02.309 "raid": { 00:10:02.309 "uuid": "4a870968-e548-47b9-9ea5-18c213f8d6a3", 00:10:02.309 "strip_size_kb": 0, 00:10:02.309 "state": "online", 00:10:02.309 "raid_level": "raid1", 00:10:02.309 "superblock": false, 00:10:02.309 "num_base_bdevs": 3, 00:10:02.309 "num_base_bdevs_discovered": 3, 00:10:02.309 "num_base_bdevs_operational": 3, 00:10:02.309 "base_bdevs_list": [ 00:10:02.309 { 00:10:02.309 "name": "NewBaseBdev", 00:10:02.309 "uuid": "50c00b7a-4fa5-4fec-b19f-eacbac4e1a2a", 00:10:02.309 "is_configured": true, 00:10:02.309 "data_offset": 0, 00:10:02.309 "data_size": 65536 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "name": "BaseBdev2", 00:10:02.309 "uuid": "4894516e-e926-4e95-aa93-65c5049b0e2b", 00:10:02.309 "is_configured": true, 00:10:02.309 "data_offset": 0, 00:10:02.309 "data_size": 65536 00:10:02.309 }, 00:10:02.309 { 00:10:02.309 "name": "BaseBdev3", 00:10:02.309 "uuid": "9cd1ce2a-668f-4042-a4e6-a95516dafaea", 00:10:02.309 "is_configured": true, 
00:10:02.309 "data_offset": 0, 00:10:02.309 "data_size": 65536 00:10:02.309 } 00:10:02.309 ] 00:10:02.309 } 00:10:02.309 } 00:10:02.309 }' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:02.309 BaseBdev2 00:10:02.309 BaseBdev3' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.309 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.568 12:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.568 12:40:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.568 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.568 [2024-11-06 12:40:51.138442] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:10:02.568 [2024-11-06 12:40:51.138494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.569 [2024-11-06 12:40:51.138621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.569 [2024-11-06 12:40:51.139055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.569 [2024-11-06 12:40:51.139086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67457 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67457 ']' 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67457 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67457 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:02.569 killing process with pid 67457 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67457' 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67457 00:10:02.569 [2024-11-06 12:40:51.178503] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:02.569 12:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67457 00:10:02.827 [2024-11-06 12:40:51.472684] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.204 12:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.205 00:10:04.205 real 0m11.988s 00:10:04.205 user 0m19.734s 00:10:04.205 sys 0m1.708s 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.205 ************************************ 00:10:04.205 END TEST raid_state_function_test 00:10:04.205 ************************************ 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.205 12:40:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:04.205 12:40:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:04.205 12:40:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.205 12:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.205 ************************************ 00:10:04.205 START TEST raid_state_function_test_sb 00:10:04.205 ************************************ 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68095 00:10:04.205 Process raid pid: 68095 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68095' 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68095 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68095 ']' 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.205 12:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.205 [2024-11-06 12:40:52.782050] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:10:04.205 [2024-11-06 12:40:52.782270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.463 [2024-11-06 12:40:52.972991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.721 [2024-11-06 12:40:53.170114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.980 [2024-11-06 12:40:53.400226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.980 [2024-11-06 12:40:53.400290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.238 [2024-11-06 12:40:53.798322] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.238 [2024-11-06 12:40:53.798398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.238 [2024-11-06 12:40:53.798417] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.238 [2024-11-06 12:40:53.798434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.238 [2024-11-06 12:40:53.798445] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:05.238 [2024-11-06 12:40:53.798460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.238 "name": "Existed_Raid", 00:10:05.238 "uuid": "11524f10-318c-4b60-bee2-7022412de285", 00:10:05.238 "strip_size_kb": 0, 00:10:05.238 "state": "configuring", 00:10:05.238 "raid_level": "raid1", 00:10:05.238 "superblock": true, 00:10:05.238 "num_base_bdevs": 3, 00:10:05.238 "num_base_bdevs_discovered": 0, 00:10:05.238 "num_base_bdevs_operational": 3, 00:10:05.238 "base_bdevs_list": [ 00:10:05.238 { 00:10:05.238 "name": "BaseBdev1", 00:10:05.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.238 "is_configured": false, 00:10:05.238 "data_offset": 0, 00:10:05.238 "data_size": 0 00:10:05.238 }, 00:10:05.238 { 00:10:05.238 "name": "BaseBdev2", 00:10:05.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.238 "is_configured": false, 00:10:05.238 "data_offset": 0, 00:10:05.238 "data_size": 0 00:10:05.238 }, 00:10:05.238 { 00:10:05.238 "name": "BaseBdev3", 00:10:05.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.238 "is_configured": false, 00:10:05.238 "data_offset": 0, 00:10:05.238 "data_size": 0 00:10:05.238 } 00:10:05.238 ] 00:10:05.238 }' 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.238 12:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.806 [2024-11-06 12:40:54.354369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.806 [2024-11-06 12:40:54.354426] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.806 [2024-11-06 12:40:54.362337] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.806 [2024-11-06 12:40:54.362396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.806 [2024-11-06 12:40:54.362412] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.806 [2024-11-06 12:40:54.362428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.806 [2024-11-06 12:40:54.362438] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.806 [2024-11-06 12:40:54.362453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.806 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.806 [2024-11-06 12:40:54.411914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.807 BaseBdev1 
00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.807 [ 00:10:05.807 { 00:10:05.807 "name": "BaseBdev1", 00:10:05.807 "aliases": [ 00:10:05.807 "79e73659-2808-43e6-8c73-15fce49b9e4e" 00:10:05.807 ], 00:10:05.807 "product_name": "Malloc disk", 00:10:05.807 "block_size": 512, 00:10:05.807 "num_blocks": 65536, 00:10:05.807 "uuid": "79e73659-2808-43e6-8c73-15fce49b9e4e", 00:10:05.807 "assigned_rate_limits": { 00:10:05.807 
"rw_ios_per_sec": 0, 00:10:05.807 "rw_mbytes_per_sec": 0, 00:10:05.807 "r_mbytes_per_sec": 0, 00:10:05.807 "w_mbytes_per_sec": 0 00:10:05.807 }, 00:10:05.807 "claimed": true, 00:10:05.807 "claim_type": "exclusive_write", 00:10:05.807 "zoned": false, 00:10:05.807 "supported_io_types": { 00:10:05.807 "read": true, 00:10:05.807 "write": true, 00:10:05.807 "unmap": true, 00:10:05.807 "flush": true, 00:10:05.807 "reset": true, 00:10:05.807 "nvme_admin": false, 00:10:05.807 "nvme_io": false, 00:10:05.807 "nvme_io_md": false, 00:10:05.807 "write_zeroes": true, 00:10:05.807 "zcopy": true, 00:10:05.807 "get_zone_info": false, 00:10:05.807 "zone_management": false, 00:10:05.807 "zone_append": false, 00:10:05.807 "compare": false, 00:10:05.807 "compare_and_write": false, 00:10:05.807 "abort": true, 00:10:05.807 "seek_hole": false, 00:10:05.807 "seek_data": false, 00:10:05.807 "copy": true, 00:10:05.807 "nvme_iov_md": false 00:10:05.807 }, 00:10:05.807 "memory_domains": [ 00:10:05.807 { 00:10:05.807 "dma_device_id": "system", 00:10:05.807 "dma_device_type": 1 00:10:05.807 }, 00:10:05.807 { 00:10:05.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.807 "dma_device_type": 2 00:10:05.807 } 00:10:05.807 ], 00:10:05.807 "driver_specific": {} 00:10:05.807 } 00:10:05.807 ] 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.807 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.065 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.065 "name": "Existed_Raid", 00:10:06.065 "uuid": "849c4790-06ca-4fba-a198-515d83318ba2", 00:10:06.065 "strip_size_kb": 0, 00:10:06.065 "state": "configuring", 00:10:06.065 "raid_level": "raid1", 00:10:06.065 "superblock": true, 00:10:06.065 "num_base_bdevs": 3, 00:10:06.065 "num_base_bdevs_discovered": 1, 00:10:06.065 "num_base_bdevs_operational": 3, 00:10:06.065 "base_bdevs_list": [ 00:10:06.065 { 00:10:06.065 "name": "BaseBdev1", 00:10:06.065 "uuid": "79e73659-2808-43e6-8c73-15fce49b9e4e", 00:10:06.065 "is_configured": true, 00:10:06.065 "data_offset": 2048, 00:10:06.065 "data_size": 63488 
00:10:06.065 }, 00:10:06.065 { 00:10:06.065 "name": "BaseBdev2", 00:10:06.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.065 "is_configured": false, 00:10:06.065 "data_offset": 0, 00:10:06.065 "data_size": 0 00:10:06.065 }, 00:10:06.065 { 00:10:06.065 "name": "BaseBdev3", 00:10:06.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.065 "is_configured": false, 00:10:06.065 "data_offset": 0, 00:10:06.065 "data_size": 0 00:10:06.065 } 00:10:06.065 ] 00:10:06.065 }' 00:10:06.065 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.065 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.324 [2024-11-06 12:40:54.920146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.324 [2024-11-06 12:40:54.920398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.324 [2024-11-06 12:40:54.932321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.324 [2024-11-06 12:40:54.935054] 
bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.324 [2024-11-06 12:40:54.935269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.324 [2024-11-06 12:40:54.935299] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.324 [2024-11-06 12:40:54.935344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.324 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.652 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.652 "name": "Existed_Raid", 00:10:06.652 "uuid": "1725147b-99f3-4573-9e08-84a6cf7cd57a", 00:10:06.652 "strip_size_kb": 0, 00:10:06.652 "state": "configuring", 00:10:06.652 "raid_level": "raid1", 00:10:06.652 "superblock": true, 00:10:06.652 "num_base_bdevs": 3, 00:10:06.652 "num_base_bdevs_discovered": 1, 00:10:06.652 "num_base_bdevs_operational": 3, 00:10:06.652 "base_bdevs_list": [ 00:10:06.652 { 00:10:06.652 "name": "BaseBdev1", 00:10:06.652 "uuid": "79e73659-2808-43e6-8c73-15fce49b9e4e", 00:10:06.652 "is_configured": true, 00:10:06.652 "data_offset": 2048, 00:10:06.652 "data_size": 63488 00:10:06.652 }, 00:10:06.652 { 00:10:06.652 "name": "BaseBdev2", 00:10:06.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.652 "is_configured": false, 00:10:06.652 "data_offset": 0, 00:10:06.652 "data_size": 0 00:10:06.652 }, 00:10:06.652 { 00:10:06.652 "name": "BaseBdev3", 00:10:06.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.652 "is_configured": false, 00:10:06.652 "data_offset": 0, 00:10:06.652 "data_size": 0 00:10:06.652 } 00:10:06.652 ] 00:10:06.652 }' 00:10:06.652 12:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.652 12:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:06.910 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.910 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 [2024-11-06 12:40:55.500345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.911 BaseBdev2 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 [ 00:10:06.911 { 00:10:06.911 "name": "BaseBdev2", 00:10:06.911 "aliases": [ 00:10:06.911 "c5700b91-299c-43b9-bc7c-3c54df76cfe8" 00:10:06.911 ], 00:10:06.911 "product_name": "Malloc disk", 00:10:06.911 "block_size": 512, 00:10:06.911 "num_blocks": 65536, 00:10:06.911 "uuid": "c5700b91-299c-43b9-bc7c-3c54df76cfe8", 00:10:06.911 "assigned_rate_limits": { 00:10:06.911 "rw_ios_per_sec": 0, 00:10:06.911 "rw_mbytes_per_sec": 0, 00:10:06.911 "r_mbytes_per_sec": 0, 00:10:06.911 "w_mbytes_per_sec": 0 00:10:06.911 }, 00:10:06.911 "claimed": true, 00:10:06.911 "claim_type": "exclusive_write", 00:10:06.911 "zoned": false, 00:10:06.911 "supported_io_types": { 00:10:06.911 "read": true, 00:10:06.911 "write": true, 00:10:06.911 "unmap": true, 00:10:06.911 "flush": true, 00:10:06.911 "reset": true, 00:10:06.911 "nvme_admin": false, 00:10:06.911 "nvme_io": false, 00:10:06.911 "nvme_io_md": false, 00:10:06.911 "write_zeroes": true, 00:10:06.911 "zcopy": true, 00:10:06.911 "get_zone_info": false, 00:10:06.911 "zone_management": false, 00:10:06.911 "zone_append": false, 00:10:06.911 "compare": false, 00:10:06.911 "compare_and_write": false, 00:10:06.911 "abort": true, 00:10:06.911 "seek_hole": false, 00:10:06.911 "seek_data": false, 00:10:06.911 "copy": true, 00:10:06.911 "nvme_iov_md": false 00:10:06.911 }, 00:10:06.911 "memory_domains": [ 00:10:06.911 { 00:10:06.911 "dma_device_id": "system", 00:10:06.911 "dma_device_type": 1 00:10:06.911 }, 00:10:06.911 { 00:10:06.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.911 "dma_device_type": 2 00:10:06.911 } 00:10:06.911 ], 00:10:06.911 "driver_specific": {} 00:10:06.911 } 00:10:06.911 ] 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.170 
12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.170 "name": "Existed_Raid", 00:10:07.170 "uuid": "1725147b-99f3-4573-9e08-84a6cf7cd57a", 00:10:07.170 "strip_size_kb": 0, 00:10:07.170 "state": "configuring", 00:10:07.170 "raid_level": "raid1", 00:10:07.170 "superblock": true, 00:10:07.170 "num_base_bdevs": 3, 00:10:07.170 "num_base_bdevs_discovered": 2, 00:10:07.170 "num_base_bdevs_operational": 3, 00:10:07.170 "base_bdevs_list": [ 00:10:07.170 { 00:10:07.170 "name": "BaseBdev1", 00:10:07.170 "uuid": "79e73659-2808-43e6-8c73-15fce49b9e4e", 00:10:07.170 "is_configured": true, 00:10:07.170 "data_offset": 2048, 00:10:07.170 "data_size": 63488 00:10:07.170 }, 00:10:07.170 { 00:10:07.170 "name": "BaseBdev2", 00:10:07.170 "uuid": "c5700b91-299c-43b9-bc7c-3c54df76cfe8", 00:10:07.170 "is_configured": true, 00:10:07.170 "data_offset": 2048, 00:10:07.170 "data_size": 63488 00:10:07.170 }, 00:10:07.170 { 00:10:07.170 "name": "BaseBdev3", 00:10:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.171 "is_configured": false, 00:10:07.171 "data_offset": 0, 00:10:07.171 "data_size": 0 00:10:07.171 } 00:10:07.171 ] 00:10:07.171 }' 00:10:07.171 12:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.171 12:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.429 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.429 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.429 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 [2024-11-06 12:40:56.101570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.688 [2024-11-06 12:40:56.101990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:07.688 [2024-11-06 12:40:56.102023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:07.688 [2024-11-06 12:40:56.102451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:07.688 BaseBdev3 00:10:07.688 [2024-11-06 12:40:56.102677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.688 [2024-11-06 12:40:56.102694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:07.688 [2024-11-06 12:40:56.102892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.688 12:40:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 [ 00:10:07.688 { 00:10:07.688 "name": "BaseBdev3", 00:10:07.688 "aliases": [ 00:10:07.688 "eea6aafb-f811-43e6-85d1-88776e95acea" 00:10:07.688 ], 00:10:07.688 "product_name": "Malloc disk", 00:10:07.688 "block_size": 512, 00:10:07.688 "num_blocks": 65536, 00:10:07.688 "uuid": "eea6aafb-f811-43e6-85d1-88776e95acea", 00:10:07.688 "assigned_rate_limits": { 00:10:07.688 "rw_ios_per_sec": 0, 00:10:07.688 "rw_mbytes_per_sec": 0, 00:10:07.688 "r_mbytes_per_sec": 0, 00:10:07.688 "w_mbytes_per_sec": 0 00:10:07.688 }, 00:10:07.688 "claimed": true, 00:10:07.688 "claim_type": "exclusive_write", 00:10:07.688 "zoned": false, 00:10:07.688 "supported_io_types": { 00:10:07.688 "read": true, 00:10:07.688 "write": true, 00:10:07.688 "unmap": true, 00:10:07.688 "flush": true, 00:10:07.688 "reset": true, 00:10:07.688 "nvme_admin": false, 00:10:07.688 "nvme_io": false, 00:10:07.688 "nvme_io_md": false, 00:10:07.688 "write_zeroes": true, 00:10:07.688 "zcopy": true, 00:10:07.688 "get_zone_info": false, 00:10:07.688 "zone_management": false, 00:10:07.688 "zone_append": false, 00:10:07.688 "compare": false, 00:10:07.688 "compare_and_write": false, 00:10:07.688 "abort": true, 00:10:07.688 "seek_hole": false, 00:10:07.688 "seek_data": false, 00:10:07.688 "copy": true, 00:10:07.688 "nvme_iov_md": false 00:10:07.688 }, 00:10:07.688 "memory_domains": [ 00:10:07.688 { 00:10:07.688 "dma_device_id": "system", 00:10:07.688 "dma_device_type": 1 00:10:07.688 }, 00:10:07.688 { 00:10:07.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.688 "dma_device_type": 2 00:10:07.688 } 00:10:07.688 ], 00:10:07.688 "driver_specific": {} 00:10:07.688 } 00:10:07.688 ] 
00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 12:40:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.688 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.688 "name": "Existed_Raid", 00:10:07.688 "uuid": "1725147b-99f3-4573-9e08-84a6cf7cd57a", 00:10:07.689 "strip_size_kb": 0, 00:10:07.689 "state": "online", 00:10:07.689 "raid_level": "raid1", 00:10:07.689 "superblock": true, 00:10:07.689 "num_base_bdevs": 3, 00:10:07.689 "num_base_bdevs_discovered": 3, 00:10:07.689 "num_base_bdevs_operational": 3, 00:10:07.689 "base_bdevs_list": [ 00:10:07.689 { 00:10:07.689 "name": "BaseBdev1", 00:10:07.689 "uuid": "79e73659-2808-43e6-8c73-15fce49b9e4e", 00:10:07.689 "is_configured": true, 00:10:07.689 "data_offset": 2048, 00:10:07.689 "data_size": 63488 00:10:07.689 }, 00:10:07.689 { 00:10:07.689 "name": "BaseBdev2", 00:10:07.689 "uuid": "c5700b91-299c-43b9-bc7c-3c54df76cfe8", 00:10:07.689 "is_configured": true, 00:10:07.689 "data_offset": 2048, 00:10:07.689 "data_size": 63488 00:10:07.689 }, 00:10:07.689 { 00:10:07.689 "name": "BaseBdev3", 00:10:07.689 "uuid": "eea6aafb-f811-43e6-85d1-88776e95acea", 00:10:07.689 "is_configured": true, 00:10:07.689 "data_offset": 2048, 00:10:07.689 "data_size": 63488 00:10:07.689 } 00:10:07.689 ] 00:10:07.689 }' 00:10:07.689 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.689 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.256 [2024-11-06 12:40:56.682424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.256 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.256 "name": "Existed_Raid", 00:10:08.256 "aliases": [ 00:10:08.256 "1725147b-99f3-4573-9e08-84a6cf7cd57a" 00:10:08.256 ], 00:10:08.256 "product_name": "Raid Volume", 00:10:08.256 "block_size": 512, 00:10:08.256 "num_blocks": 63488, 00:10:08.256 "uuid": "1725147b-99f3-4573-9e08-84a6cf7cd57a", 00:10:08.256 "assigned_rate_limits": { 00:10:08.256 "rw_ios_per_sec": 0, 00:10:08.256 "rw_mbytes_per_sec": 0, 00:10:08.256 "r_mbytes_per_sec": 0, 00:10:08.256 "w_mbytes_per_sec": 0 00:10:08.256 }, 00:10:08.256 "claimed": false, 00:10:08.256 "zoned": false, 00:10:08.256 "supported_io_types": { 00:10:08.256 "read": true, 00:10:08.256 "write": true, 00:10:08.256 "unmap": false, 00:10:08.256 "flush": false, 00:10:08.256 "reset": true, 00:10:08.256 "nvme_admin": false, 00:10:08.256 "nvme_io": false, 00:10:08.256 "nvme_io_md": false, 00:10:08.256 
"write_zeroes": true, 00:10:08.256 "zcopy": false, 00:10:08.256 "get_zone_info": false, 00:10:08.256 "zone_management": false, 00:10:08.256 "zone_append": false, 00:10:08.256 "compare": false, 00:10:08.256 "compare_and_write": false, 00:10:08.256 "abort": false, 00:10:08.256 "seek_hole": false, 00:10:08.256 "seek_data": false, 00:10:08.256 "copy": false, 00:10:08.256 "nvme_iov_md": false 00:10:08.256 }, 00:10:08.256 "memory_domains": [ 00:10:08.256 { 00:10:08.256 "dma_device_id": "system", 00:10:08.256 "dma_device_type": 1 00:10:08.256 }, 00:10:08.256 { 00:10:08.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.257 "dma_device_type": 2 00:10:08.257 }, 00:10:08.257 { 00:10:08.257 "dma_device_id": "system", 00:10:08.257 "dma_device_type": 1 00:10:08.257 }, 00:10:08.257 { 00:10:08.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.257 "dma_device_type": 2 00:10:08.257 }, 00:10:08.257 { 00:10:08.257 "dma_device_id": "system", 00:10:08.257 "dma_device_type": 1 00:10:08.257 }, 00:10:08.257 { 00:10:08.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.257 "dma_device_type": 2 00:10:08.257 } 00:10:08.257 ], 00:10:08.257 "driver_specific": { 00:10:08.257 "raid": { 00:10:08.257 "uuid": "1725147b-99f3-4573-9e08-84a6cf7cd57a", 00:10:08.257 "strip_size_kb": 0, 00:10:08.257 "state": "online", 00:10:08.257 "raid_level": "raid1", 00:10:08.257 "superblock": true, 00:10:08.257 "num_base_bdevs": 3, 00:10:08.257 "num_base_bdevs_discovered": 3, 00:10:08.257 "num_base_bdevs_operational": 3, 00:10:08.257 "base_bdevs_list": [ 00:10:08.257 { 00:10:08.257 "name": "BaseBdev1", 00:10:08.257 "uuid": "79e73659-2808-43e6-8c73-15fce49b9e4e", 00:10:08.257 "is_configured": true, 00:10:08.257 "data_offset": 2048, 00:10:08.257 "data_size": 63488 00:10:08.257 }, 00:10:08.257 { 00:10:08.257 "name": "BaseBdev2", 00:10:08.257 "uuid": "c5700b91-299c-43b9-bc7c-3c54df76cfe8", 00:10:08.257 "is_configured": true, 00:10:08.257 "data_offset": 2048, 00:10:08.257 "data_size": 63488 00:10:08.257 }, 
00:10:08.257 { 00:10:08.257 "name": "BaseBdev3", 00:10:08.257 "uuid": "eea6aafb-f811-43e6-85d1-88776e95acea", 00:10:08.257 "is_configured": true, 00:10:08.257 "data_offset": 2048, 00:10:08.257 "data_size": 63488 00:10:08.257 } 00:10:08.257 ] 00:10:08.257 } 00:10:08.257 } 00:10:08.257 }' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.257 BaseBdev2 00:10:08.257 BaseBdev3' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.257 
12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.257 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.515 12:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.515 [2024-11-06 12:40:57.001973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.515 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.515 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.515 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.516 
12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.516 "name": "Existed_Raid", 00:10:08.516 "uuid": "1725147b-99f3-4573-9e08-84a6cf7cd57a", 00:10:08.516 "strip_size_kb": 0, 00:10:08.516 "state": "online", 00:10:08.516 "raid_level": "raid1", 00:10:08.516 "superblock": true, 00:10:08.516 "num_base_bdevs": 3, 00:10:08.516 "num_base_bdevs_discovered": 2, 00:10:08.516 "num_base_bdevs_operational": 2, 00:10:08.516 "base_bdevs_list": [ 00:10:08.516 { 00:10:08.516 "name": null, 00:10:08.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.516 "is_configured": false, 00:10:08.516 "data_offset": 0, 00:10:08.516 "data_size": 63488 00:10:08.516 }, 00:10:08.516 { 00:10:08.516 "name": "BaseBdev2", 00:10:08.516 "uuid": "c5700b91-299c-43b9-bc7c-3c54df76cfe8", 00:10:08.516 "is_configured": true, 00:10:08.516 "data_offset": 2048, 00:10:08.516 "data_size": 63488 00:10:08.516 }, 00:10:08.516 { 00:10:08.516 "name": "BaseBdev3", 00:10:08.516 "uuid": "eea6aafb-f811-43e6-85d1-88776e95acea", 00:10:08.516 "is_configured": true, 00:10:08.516 "data_offset": 2048, 00:10:08.516 "data_size": 63488 00:10:08.516 } 00:10:08.516 ] 00:10:08.516 }' 00:10:08.516 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.516 
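The `verify_raid_bdev_state` helper traced above fetches all raid bdevs and selects the one under test by name with `jq -r '.[] | select(.name == "Existed_Raid")'`. The same selection can be sketched against hypothetical sample data (field values below are illustrative, not from this run):

```shell
# bdev_raid_get_bdevs returns a JSON array; select one entry by its name.
all_raids='[
  {"name": "Existed_Raid", "state": "online", "num_base_bdevs_discovered": 2},
  {"name": "Other_Raid",   "state": "configuring"}
]'

raid_bdev_info=$(echo "$all_raids" | jq -r '.[] | select(.name == "Existed_Raid")')

# Individual fields are then extracted from the selected object, as the
# helper does when comparing against the expected state.
state=$(echo "$raid_bdev_info" | jq -r '.state')
echo "$state"
```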
12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.083 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.083 [2024-11-06 12:40:57.678221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.342 [2024-11-06 12:40:57.827003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.342 [2024-11-06 12:40:57.827320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.342 [2024-11-06 12:40:57.920881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.342 [2024-11-06 12:40:57.921139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.342 [2024-11-06 12:40:57.921342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.342 12:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.600 BaseBdev2 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.600 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.600 [ 00:10:09.600 { 00:10:09.600 "name": "BaseBdev2", 00:10:09.600 "aliases": [ 00:10:09.600 "5438e200-93d4-48c1-8499-244aec9b52a3" 00:10:09.600 ], 00:10:09.600 "product_name": "Malloc disk", 00:10:09.600 "block_size": 512, 00:10:09.600 "num_blocks": 65536, 00:10:09.600 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:09.600 "assigned_rate_limits": { 00:10:09.600 "rw_ios_per_sec": 0, 00:10:09.600 "rw_mbytes_per_sec": 0, 00:10:09.600 "r_mbytes_per_sec": 0, 00:10:09.600 "w_mbytes_per_sec": 0 00:10:09.600 }, 00:10:09.600 "claimed": false, 00:10:09.600 "zoned": false, 00:10:09.600 "supported_io_types": { 00:10:09.600 "read": true, 00:10:09.600 "write": true, 00:10:09.600 "unmap": true, 00:10:09.600 "flush": true, 00:10:09.600 "reset": true, 00:10:09.600 "nvme_admin": false, 00:10:09.600 "nvme_io": false, 00:10:09.600 
"nvme_io_md": false, 00:10:09.600 "write_zeroes": true, 00:10:09.600 "zcopy": true, 00:10:09.600 "get_zone_info": false, 00:10:09.600 "zone_management": false, 00:10:09.600 "zone_append": false, 00:10:09.600 "compare": false, 00:10:09.600 "compare_and_write": false, 00:10:09.600 "abort": true, 00:10:09.600 "seek_hole": false, 00:10:09.600 "seek_data": false, 00:10:09.600 "copy": true, 00:10:09.600 "nvme_iov_md": false 00:10:09.600 }, 00:10:09.600 "memory_domains": [ 00:10:09.600 { 00:10:09.600 "dma_device_id": "system", 00:10:09.600 "dma_device_type": 1 00:10:09.600 }, 00:10:09.600 { 00:10:09.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.601 "dma_device_type": 2 00:10:09.601 } 00:10:09.601 ], 00:10:09.601 "driver_specific": {} 00:10:09.601 } 00:10:09.601 ] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.601 BaseBdev3 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.601 [ 00:10:09.601 { 00:10:09.601 "name": "BaseBdev3", 00:10:09.601 "aliases": [ 00:10:09.601 "671d03f9-fda5-4615-8713-a899dde89fd3" 00:10:09.601 ], 00:10:09.601 "product_name": "Malloc disk", 00:10:09.601 "block_size": 512, 00:10:09.601 "num_blocks": 65536, 00:10:09.601 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:09.601 "assigned_rate_limits": { 00:10:09.601 "rw_ios_per_sec": 0, 00:10:09.601 "rw_mbytes_per_sec": 0, 00:10:09.601 "r_mbytes_per_sec": 0, 00:10:09.601 "w_mbytes_per_sec": 0 00:10:09.601 }, 00:10:09.601 "claimed": false, 00:10:09.601 "zoned": false, 00:10:09.601 "supported_io_types": { 00:10:09.601 "read": true, 00:10:09.601 "write": true, 00:10:09.601 "unmap": true, 00:10:09.601 "flush": true, 00:10:09.601 "reset": true, 00:10:09.601 "nvme_admin": false, 
00:10:09.601 "nvme_io": false, 00:10:09.601 "nvme_io_md": false, 00:10:09.601 "write_zeroes": true, 00:10:09.601 "zcopy": true, 00:10:09.601 "get_zone_info": false, 00:10:09.601 "zone_management": false, 00:10:09.601 "zone_append": false, 00:10:09.601 "compare": false, 00:10:09.601 "compare_and_write": false, 00:10:09.601 "abort": true, 00:10:09.601 "seek_hole": false, 00:10:09.601 "seek_data": false, 00:10:09.601 "copy": true, 00:10:09.601 "nvme_iov_md": false 00:10:09.601 }, 00:10:09.601 "memory_domains": [ 00:10:09.601 { 00:10:09.601 "dma_device_id": "system", 00:10:09.601 "dma_device_type": 1 00:10:09.601 }, 00:10:09.601 { 00:10:09.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.601 "dma_device_type": 2 00:10:09.601 } 00:10:09.601 ], 00:10:09.601 "driver_specific": {} 00:10:09.601 } 00:10:09.601 ] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.601 [2024-11-06 12:40:58.142254] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.601 [2024-11-06 12:40:58.142454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.601 [2024-11-06 12:40:58.142592] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.601 [2024-11-06 12:40:58.145314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.601 
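The `bdev_raid_create ... -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\'''` line above looks mangled, but it is just how bash xtrace prints an argument that itself contains literal single quotes. A small sketch showing that evaluating the printed form recovers the original argument:

```shell
# The word as xtrace printed it: '' then \' then 'content' then \' then ''.
printed="''\''BaseBdev1 BaseBdev2 BaseBdev3'\'''"

# Re-evaluating the xtrace form yields the argument the script actually
# passed: the base bdev list wrapped in literal single quotes.
eval "arg=$printed"
echo "$arg"
```

So the RPC received the single string `'BaseBdev1 BaseBdev2 BaseBdev3'` (quotes included) as its `-b` value, which is why BaseBdev1 is reported as not existing yet while BaseBdev2 and BaseBdev3 are claimed.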
12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.601 "name": "Existed_Raid", 00:10:09.601 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:09.601 "strip_size_kb": 0, 00:10:09.601 "state": "configuring", 00:10:09.601 "raid_level": "raid1", 00:10:09.601 "superblock": true, 00:10:09.601 "num_base_bdevs": 3, 00:10:09.601 "num_base_bdevs_discovered": 2, 00:10:09.601 "num_base_bdevs_operational": 3, 00:10:09.601 "base_bdevs_list": [ 00:10:09.601 { 00:10:09.601 "name": "BaseBdev1", 00:10:09.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.601 "is_configured": false, 00:10:09.601 "data_offset": 0, 00:10:09.601 "data_size": 0 00:10:09.601 }, 00:10:09.601 { 00:10:09.601 "name": "BaseBdev2", 00:10:09.601 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:09.601 "is_configured": true, 00:10:09.601 "data_offset": 2048, 00:10:09.601 "data_size": 63488 00:10:09.601 }, 00:10:09.601 { 00:10:09.601 "name": "BaseBdev3", 00:10:09.601 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:09.601 "is_configured": true, 00:10:09.601 "data_offset": 2048, 00:10:09.601 "data_size": 63488 00:10:09.601 } 00:10:09.601 ] 00:10:09.601 }' 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.601 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.169 [2024-11-06 12:40:58.662416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.169 12:40:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.169 "name": 
"Existed_Raid", 00:10:10.169 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:10.169 "strip_size_kb": 0, 00:10:10.169 "state": "configuring", 00:10:10.169 "raid_level": "raid1", 00:10:10.169 "superblock": true, 00:10:10.169 "num_base_bdevs": 3, 00:10:10.169 "num_base_bdevs_discovered": 1, 00:10:10.169 "num_base_bdevs_operational": 3, 00:10:10.169 "base_bdevs_list": [ 00:10:10.169 { 00:10:10.169 "name": "BaseBdev1", 00:10:10.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.169 "is_configured": false, 00:10:10.169 "data_offset": 0, 00:10:10.169 "data_size": 0 00:10:10.169 }, 00:10:10.169 { 00:10:10.169 "name": null, 00:10:10.169 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:10.169 "is_configured": false, 00:10:10.169 "data_offset": 0, 00:10:10.169 "data_size": 63488 00:10:10.169 }, 00:10:10.169 { 00:10:10.169 "name": "BaseBdev3", 00:10:10.169 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:10.169 "is_configured": true, 00:10:10.169 "data_offset": 2048, 00:10:10.169 "data_size": 63488 00:10:10.169 } 00:10:10.169 ] 00:10:10.169 }' 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.169 12:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.737 
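After removing BaseBdev2, the trace (bdev_raid.sh@295) confirms the slot is deconfigured by indexing straight into the array: `jq '.[0].base_bdevs_list[1].is_configured'`. A standalone sketch of that check, with hypothetical sample data matching the post-removal state:

```shell
# State after the remove: slot 1 (formerly BaseBdev2) has a null name
# and is no longer configured.
raids='[{"name": "Existed_Raid", "base_bdevs_list": [
          {"name": "BaseBdev1", "is_configured": false},
          {"name": null,        "is_configured": false},
          {"name": "BaseBdev3", "is_configured": true}]}]'

# Index into the first raid bdev, second base bdev slot.
flag=$(echo "$raids" | jq '.[0].base_bdevs_list[1].is_configured')
echo "$flag"
```

Note the filter is run without `-r` here; the script then compares the literal JSON token with `[[ false == \f\a\l\s\e ]]`.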
12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.737 [2024-11-06 12:40:59.264860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.737 BaseBdev1 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.737 [ 00:10:10.737 { 00:10:10.737 "name": "BaseBdev1", 00:10:10.737 "aliases": [ 00:10:10.737 "acb59b45-1a9f-4914-94c9-0aacd8fe1e44" 00:10:10.737 ], 00:10:10.737 "product_name": "Malloc disk", 00:10:10.737 "block_size": 512, 00:10:10.737 "num_blocks": 65536, 00:10:10.737 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:10.737 "assigned_rate_limits": { 00:10:10.737 "rw_ios_per_sec": 0, 00:10:10.737 "rw_mbytes_per_sec": 0, 00:10:10.737 "r_mbytes_per_sec": 0, 00:10:10.737 "w_mbytes_per_sec": 0 00:10:10.737 }, 00:10:10.737 "claimed": true, 00:10:10.737 "claim_type": "exclusive_write", 00:10:10.737 "zoned": false, 00:10:10.737 "supported_io_types": { 00:10:10.737 "read": true, 00:10:10.737 "write": true, 00:10:10.737 "unmap": true, 00:10:10.737 "flush": true, 00:10:10.737 "reset": true, 00:10:10.737 "nvme_admin": false, 00:10:10.737 "nvme_io": false, 00:10:10.737 "nvme_io_md": false, 00:10:10.737 "write_zeroes": true, 00:10:10.737 "zcopy": true, 00:10:10.737 "get_zone_info": false, 00:10:10.737 "zone_management": false, 00:10:10.737 "zone_append": false, 00:10:10.737 "compare": false, 00:10:10.737 "compare_and_write": false, 00:10:10.737 "abort": true, 00:10:10.737 "seek_hole": false, 00:10:10.737 "seek_data": false, 00:10:10.737 "copy": true, 00:10:10.737 "nvme_iov_md": false 00:10:10.737 }, 00:10:10.737 "memory_domains": [ 00:10:10.737 { 00:10:10.737 "dma_device_id": "system", 00:10:10.737 "dma_device_type": 1 00:10:10.737 }, 00:10:10.737 { 00:10:10.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.737 "dma_device_type": 2 00:10:10.737 } 00:10:10.737 ], 00:10:10.737 "driver_specific": {} 00:10:10.737 } 00:10:10.737 ] 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:10.737 
12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.737 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.738 "name": "Existed_Raid", 00:10:10.738 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:10.738 "strip_size_kb": 0, 
00:10:10.738 "state": "configuring", 00:10:10.738 "raid_level": "raid1", 00:10:10.738 "superblock": true, 00:10:10.738 "num_base_bdevs": 3, 00:10:10.738 "num_base_bdevs_discovered": 2, 00:10:10.738 "num_base_bdevs_operational": 3, 00:10:10.738 "base_bdevs_list": [ 00:10:10.738 { 00:10:10.738 "name": "BaseBdev1", 00:10:10.738 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:10.738 "is_configured": true, 00:10:10.738 "data_offset": 2048, 00:10:10.738 "data_size": 63488 00:10:10.738 }, 00:10:10.738 { 00:10:10.738 "name": null, 00:10:10.738 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:10.738 "is_configured": false, 00:10:10.738 "data_offset": 0, 00:10:10.738 "data_size": 63488 00:10:10.738 }, 00:10:10.738 { 00:10:10.738 "name": "BaseBdev3", 00:10:10.738 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:10.738 "is_configured": true, 00:10:10.738 "data_offset": 2048, 00:10:10.738 "data_size": 63488 00:10:10.738 } 00:10:10.738 ] 00:10:10.738 }' 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.738 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.305 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.305 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.305 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 [2024-11-06 12:40:59.881118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.306 "name": "Existed_Raid", 00:10:11.306 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:11.306 "strip_size_kb": 0, 00:10:11.306 "state": "configuring", 00:10:11.306 "raid_level": "raid1", 00:10:11.306 "superblock": true, 00:10:11.306 "num_base_bdevs": 3, 00:10:11.306 "num_base_bdevs_discovered": 1, 00:10:11.306 "num_base_bdevs_operational": 3, 00:10:11.306 "base_bdevs_list": [ 00:10:11.306 { 00:10:11.306 "name": "BaseBdev1", 00:10:11.306 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:11.306 "is_configured": true, 00:10:11.306 "data_offset": 2048, 00:10:11.306 "data_size": 63488 00:10:11.306 }, 00:10:11.306 { 00:10:11.306 "name": null, 00:10:11.306 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:11.306 "is_configured": false, 00:10:11.306 "data_offset": 0, 00:10:11.306 "data_size": 63488 00:10:11.306 }, 00:10:11.306 { 00:10:11.306 "name": null, 00:10:11.306 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:11.306 "is_configured": false, 00:10:11.306 "data_offset": 0, 00:10:11.306 "data_size": 63488 00:10:11.306 } 00:10:11.306 ] 00:10:11.306 }' 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.306 12:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.873 [2024-11-06 12:41:00.497341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.873 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.131 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.131 "name": "Existed_Raid", 00:10:12.131 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:12.131 "strip_size_kb": 0, 00:10:12.131 "state": "configuring", 00:10:12.131 "raid_level": "raid1", 00:10:12.131 "superblock": true, 00:10:12.131 "num_base_bdevs": 3, 00:10:12.131 "num_base_bdevs_discovered": 2, 00:10:12.131 "num_base_bdevs_operational": 3, 00:10:12.131 "base_bdevs_list": [ 00:10:12.131 { 00:10:12.131 "name": "BaseBdev1", 00:10:12.131 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:12.131 "is_configured": true, 00:10:12.131 "data_offset": 2048, 00:10:12.131 "data_size": 63488 00:10:12.131 }, 00:10:12.131 { 00:10:12.131 "name": null, 00:10:12.131 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:12.131 "is_configured": false, 00:10:12.131 "data_offset": 0, 00:10:12.131 "data_size": 63488 00:10:12.131 }, 00:10:12.131 { 00:10:12.131 "name": "BaseBdev3", 00:10:12.131 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:12.131 "is_configured": true, 00:10:12.131 "data_offset": 2048, 00:10:12.131 "data_size": 63488 00:10:12.131 } 00:10:12.131 ] 00:10:12.131 }' 00:10:12.131 12:41:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.131 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.390 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.390 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.390 12:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.390 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.390 12:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.390 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:12.390 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.390 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.390 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.390 [2024-11-06 12:41:01.033503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.683 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.683 "name": "Existed_Raid", 00:10:12.683 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:12.683 "strip_size_kb": 0, 00:10:12.683 "state": "configuring", 00:10:12.683 "raid_level": "raid1", 00:10:12.683 "superblock": true, 00:10:12.683 "num_base_bdevs": 3, 00:10:12.684 "num_base_bdevs_discovered": 1, 00:10:12.684 "num_base_bdevs_operational": 3, 00:10:12.684 "base_bdevs_list": [ 00:10:12.684 { 00:10:12.684 "name": null, 00:10:12.684 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:12.684 "is_configured": false, 00:10:12.684 "data_offset": 0, 00:10:12.684 "data_size": 63488 00:10:12.684 }, 00:10:12.684 { 00:10:12.684 "name": null, 00:10:12.684 "uuid": 
"5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:12.684 "is_configured": false, 00:10:12.684 "data_offset": 0, 00:10:12.684 "data_size": 63488 00:10:12.684 }, 00:10:12.684 { 00:10:12.684 "name": "BaseBdev3", 00:10:12.684 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:12.684 "is_configured": true, 00:10:12.684 "data_offset": 2048, 00:10:12.684 "data_size": 63488 00:10:12.684 } 00:10:12.684 ] 00:10:12.684 }' 00:10:12.684 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.684 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.250 [2024-11-06 12:41:01.674343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.250 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.250 "name": "Existed_Raid", 00:10:13.250 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:13.250 "strip_size_kb": 0, 00:10:13.250 "state": "configuring", 00:10:13.250 
"raid_level": "raid1", 00:10:13.250 "superblock": true, 00:10:13.250 "num_base_bdevs": 3, 00:10:13.250 "num_base_bdevs_discovered": 2, 00:10:13.250 "num_base_bdevs_operational": 3, 00:10:13.250 "base_bdevs_list": [ 00:10:13.250 { 00:10:13.250 "name": null, 00:10:13.250 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:13.250 "is_configured": false, 00:10:13.250 "data_offset": 0, 00:10:13.250 "data_size": 63488 00:10:13.250 }, 00:10:13.250 { 00:10:13.250 "name": "BaseBdev2", 00:10:13.250 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:13.250 "is_configured": true, 00:10:13.250 "data_offset": 2048, 00:10:13.250 "data_size": 63488 00:10:13.250 }, 00:10:13.250 { 00:10:13.250 "name": "BaseBdev3", 00:10:13.251 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:13.251 "is_configured": true, 00:10:13.251 "data_offset": 2048, 00:10:13.251 "data_size": 63488 00:10:13.251 } 00:10:13.251 ] 00:10:13.251 }' 00:10:13.251 12:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.251 12:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.818 12:41:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u acb59b45-1a9f-4914-94c9-0aacd8fe1e44 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 [2024-11-06 12:41:02.391482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.818 [2024-11-06 12:41:02.392041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.818 [2024-11-06 12:41:02.392067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.818 NewBaseBdev 00:10:13.818 [2024-11-06 12:41:02.392435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.818 [2024-11-06 12:41:02.392639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.818 [2024-11-06 12:41:02.392670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:13.818 [2024-11-06 12:41:02.392852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.818 
12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 [ 00:10:13.818 { 00:10:13.818 "name": "NewBaseBdev", 00:10:13.818 "aliases": [ 00:10:13.818 "acb59b45-1a9f-4914-94c9-0aacd8fe1e44" 00:10:13.818 ], 00:10:13.818 "product_name": "Malloc disk", 00:10:13.818 "block_size": 512, 00:10:13.818 "num_blocks": 65536, 00:10:13.818 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:13.818 "assigned_rate_limits": { 00:10:13.818 "rw_ios_per_sec": 0, 00:10:13.818 "rw_mbytes_per_sec": 0, 00:10:13.818 "r_mbytes_per_sec": 0, 00:10:13.818 "w_mbytes_per_sec": 0 00:10:13.818 }, 00:10:13.818 "claimed": true, 00:10:13.818 "claim_type": "exclusive_write", 00:10:13.818 
"zoned": false, 00:10:13.818 "supported_io_types": { 00:10:13.818 "read": true, 00:10:13.818 "write": true, 00:10:13.818 "unmap": true, 00:10:13.818 "flush": true, 00:10:13.818 "reset": true, 00:10:13.818 "nvme_admin": false, 00:10:13.818 "nvme_io": false, 00:10:13.818 "nvme_io_md": false, 00:10:13.818 "write_zeroes": true, 00:10:13.818 "zcopy": true, 00:10:13.818 "get_zone_info": false, 00:10:13.818 "zone_management": false, 00:10:13.818 "zone_append": false, 00:10:13.818 "compare": false, 00:10:13.818 "compare_and_write": false, 00:10:13.818 "abort": true, 00:10:13.818 "seek_hole": false, 00:10:13.818 "seek_data": false, 00:10:13.818 "copy": true, 00:10:13.818 "nvme_iov_md": false 00:10:13.818 }, 00:10:13.818 "memory_domains": [ 00:10:13.818 { 00:10:13.818 "dma_device_id": "system", 00:10:13.818 "dma_device_type": 1 00:10:13.818 }, 00:10:13.818 { 00:10:13.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.818 "dma_device_type": 2 00:10:13.818 } 00:10:13.818 ], 00:10:13.818 "driver_specific": {} 00:10:13.818 } 00:10:13.818 ] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.077 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.077 "name": "Existed_Raid", 00:10:14.077 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:14.077 "strip_size_kb": 0, 00:10:14.077 "state": "online", 00:10:14.077 "raid_level": "raid1", 00:10:14.077 "superblock": true, 00:10:14.077 "num_base_bdevs": 3, 00:10:14.077 "num_base_bdevs_discovered": 3, 00:10:14.077 "num_base_bdevs_operational": 3, 00:10:14.077 "base_bdevs_list": [ 00:10:14.077 { 00:10:14.077 "name": "NewBaseBdev", 00:10:14.077 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:14.077 "is_configured": true, 00:10:14.077 "data_offset": 2048, 00:10:14.077 "data_size": 63488 00:10:14.077 }, 00:10:14.077 { 00:10:14.077 "name": "BaseBdev2", 00:10:14.077 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:14.077 "is_configured": true, 00:10:14.077 "data_offset": 2048, 00:10:14.077 "data_size": 63488 00:10:14.077 }, 00:10:14.077 
{ 00:10:14.077 "name": "BaseBdev3", 00:10:14.077 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:14.077 "is_configured": true, 00:10:14.077 "data_offset": 2048, 00:10:14.077 "data_size": 63488 00:10:14.077 } 00:10:14.077 ] 00:10:14.077 }' 00:10:14.077 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.077 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.336 [2024-11-06 12:41:02.952087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.336 12:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.595 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.595 "name": "Existed_Raid", 00:10:14.595 
"aliases": [ 00:10:14.595 "c0b6a17c-dac0-429c-93ea-710cd9950b32" 00:10:14.595 ], 00:10:14.595 "product_name": "Raid Volume", 00:10:14.595 "block_size": 512, 00:10:14.595 "num_blocks": 63488, 00:10:14.595 "uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:14.595 "assigned_rate_limits": { 00:10:14.595 "rw_ios_per_sec": 0, 00:10:14.595 "rw_mbytes_per_sec": 0, 00:10:14.595 "r_mbytes_per_sec": 0, 00:10:14.595 "w_mbytes_per_sec": 0 00:10:14.595 }, 00:10:14.595 "claimed": false, 00:10:14.595 "zoned": false, 00:10:14.595 "supported_io_types": { 00:10:14.595 "read": true, 00:10:14.595 "write": true, 00:10:14.595 "unmap": false, 00:10:14.595 "flush": false, 00:10:14.595 "reset": true, 00:10:14.595 "nvme_admin": false, 00:10:14.595 "nvme_io": false, 00:10:14.595 "nvme_io_md": false, 00:10:14.595 "write_zeroes": true, 00:10:14.595 "zcopy": false, 00:10:14.595 "get_zone_info": false, 00:10:14.595 "zone_management": false, 00:10:14.595 "zone_append": false, 00:10:14.595 "compare": false, 00:10:14.595 "compare_and_write": false, 00:10:14.595 "abort": false, 00:10:14.595 "seek_hole": false, 00:10:14.595 "seek_data": false, 00:10:14.595 "copy": false, 00:10:14.595 "nvme_iov_md": false 00:10:14.595 }, 00:10:14.595 "memory_domains": [ 00:10:14.595 { 00:10:14.595 "dma_device_id": "system", 00:10:14.595 "dma_device_type": 1 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.595 "dma_device_type": 2 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "dma_device_id": "system", 00:10:14.595 "dma_device_type": 1 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.595 "dma_device_type": 2 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "dma_device_id": "system", 00:10:14.595 "dma_device_type": 1 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.595 "dma_device_type": 2 00:10:14.595 } 00:10:14.595 ], 00:10:14.595 "driver_specific": { 00:10:14.595 "raid": { 00:10:14.595 
"uuid": "c0b6a17c-dac0-429c-93ea-710cd9950b32", 00:10:14.595 "strip_size_kb": 0, 00:10:14.595 "state": "online", 00:10:14.595 "raid_level": "raid1", 00:10:14.595 "superblock": true, 00:10:14.595 "num_base_bdevs": 3, 00:10:14.595 "num_base_bdevs_discovered": 3, 00:10:14.595 "num_base_bdevs_operational": 3, 00:10:14.595 "base_bdevs_list": [ 00:10:14.595 { 00:10:14.595 "name": "NewBaseBdev", 00:10:14.595 "uuid": "acb59b45-1a9f-4914-94c9-0aacd8fe1e44", 00:10:14.595 "is_configured": true, 00:10:14.595 "data_offset": 2048, 00:10:14.595 "data_size": 63488 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "name": "BaseBdev2", 00:10:14.595 "uuid": "5438e200-93d4-48c1-8499-244aec9b52a3", 00:10:14.595 "is_configured": true, 00:10:14.595 "data_offset": 2048, 00:10:14.595 "data_size": 63488 00:10:14.595 }, 00:10:14.595 { 00:10:14.595 "name": "BaseBdev3", 00:10:14.595 "uuid": "671d03f9-fda5-4615-8713-a899dde89fd3", 00:10:14.595 "is_configured": true, 00:10:14.595 "data_offset": 2048, 00:10:14.595 "data_size": 63488 00:10:14.595 } 00:10:14.595 ] 00:10:14.595 } 00:10:14.595 } 00:10:14.595 }' 00:10:14.595 12:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:14.595 BaseBdev2 00:10:14.595 BaseBdev3' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:14.595 12:41:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.595 12:41:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.595 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.595 [2024-11-06 12:41:03.247731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.595 [2024-11-06 12:41:03.247897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.595 [2024-11-06 12:41:03.248105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.595 [2024-11-06 12:41:03.248627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.854 [2024-11-06 12:41:03.248783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68095 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 68095 ']' 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68095 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68095 00:10:14.854 killing process with pid 68095 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68095' 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68095 00:10:14.854 [2024-11-06 12:41:03.285400] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.854 12:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68095 00:10:15.112 [2024-11-06 12:41:03.576136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.488 12:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.488 00:10:16.488 real 0m12.048s 00:10:16.488 user 0m19.738s 00:10:16.488 sys 0m1.772s 00:10:16.488 ************************************ 00:10:16.488 END TEST raid_state_function_test_sb 00:10:16.488 ************************************ 00:10:16.488 12:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.488 12:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.488 12:41:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:16.488 12:41:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:16.488 12:41:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.488 12:41:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.488 ************************************ 00:10:16.488 START TEST raid_superblock_test 00:10:16.488 ************************************ 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68732 00:10:16.488 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68732 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68732 ']' 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.489 12:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.489 [2024-11-06 12:41:04.870267] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:10:16.489 [2024-11-06 12:41:04.870670] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68732 ] 00:10:16.489 [2024-11-06 12:41:05.052957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.747 [2024-11-06 12:41:05.206998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.005 [2024-11-06 12:41:05.413560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.005 [2024-11-06 12:41:05.413854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:17.265 
12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.265 malloc1 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.265 [2024-11-06 12:41:05.906270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.265 [2024-11-06 12:41:05.906362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.265 [2024-11-06 12:41:05.906397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:17.265 [2024-11-06 12:41:05.906412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.265 [2024-11-06 12:41:05.909204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.265 [2024-11-06 12:41:05.909261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.265 pt1 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.265 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.533 malloc2 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.533 [2024-11-06 12:41:05.958316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.533 [2024-11-06 12:41:05.959141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.533 [2024-11-06 12:41:05.959287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:17.533 [2024-11-06 12:41:05.959325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.533 [2024-11-06 12:41:05.965508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.533 pt2 00:10:17.533 [2024-11-06 12:41:05.965854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.533 12:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.533 malloc3 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.533 [2024-11-06 12:41:06.038129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.533 [2024-11-06 12:41:06.038219] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.533 [2024-11-06 12:41:06.038261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:17.533 [2024-11-06 12:41:06.038281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.533 [2024-11-06 12:41:06.041698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.533 [2024-11-06 12:41:06.041751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.533 pt3 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.533 [2024-11-06 12:41:06.046507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.533 [2024-11-06 12:41:06.049578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.533 [2024-11-06 12:41:06.049698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.533 [2024-11-06 12:41:06.049958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:17.533 [2024-11-06 12:41:06.049991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.533 [2024-11-06 12:41:06.050373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.533 
[2024-11-06 12:41:06.050654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:17.533 [2024-11-06 12:41:06.050679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:17.533 [2024-11-06 12:41:06.050955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.533 "name": "raid_bdev1", 00:10:17.533 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:17.533 "strip_size_kb": 0, 00:10:17.533 "state": "online", 00:10:17.533 "raid_level": "raid1", 00:10:17.533 "superblock": true, 00:10:17.533 "num_base_bdevs": 3, 00:10:17.533 "num_base_bdevs_discovered": 3, 00:10:17.533 "num_base_bdevs_operational": 3, 00:10:17.533 "base_bdevs_list": [ 00:10:17.533 { 00:10:17.533 "name": "pt1", 00:10:17.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.533 "is_configured": true, 00:10:17.533 "data_offset": 2048, 00:10:17.533 "data_size": 63488 00:10:17.533 }, 00:10:17.533 { 00:10:17.533 "name": "pt2", 00:10:17.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.533 "is_configured": true, 00:10:17.533 "data_offset": 2048, 00:10:17.533 "data_size": 63488 00:10:17.533 }, 00:10:17.533 { 00:10:17.533 "name": "pt3", 00:10:17.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.533 "is_configured": true, 00:10:17.533 "data_offset": 2048, 00:10:17.533 "data_size": 63488 00:10:17.533 } 00:10:17.533 ] 00:10:17.533 }' 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.533 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.100 12:41:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.100 [2024-11-06 12:41:06.575531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.100 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.100 "name": "raid_bdev1", 00:10:18.100 "aliases": [ 00:10:18.100 "49f6491f-9715-4daa-8d76-79cf1af2629e" 00:10:18.100 ], 00:10:18.100 "product_name": "Raid Volume", 00:10:18.100 "block_size": 512, 00:10:18.100 "num_blocks": 63488, 00:10:18.100 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:18.100 "assigned_rate_limits": { 00:10:18.100 "rw_ios_per_sec": 0, 00:10:18.100 "rw_mbytes_per_sec": 0, 00:10:18.100 "r_mbytes_per_sec": 0, 00:10:18.100 "w_mbytes_per_sec": 0 00:10:18.100 }, 00:10:18.100 "claimed": false, 00:10:18.100 "zoned": false, 00:10:18.100 "supported_io_types": { 00:10:18.100 "read": true, 00:10:18.100 "write": true, 00:10:18.100 "unmap": false, 00:10:18.100 "flush": false, 00:10:18.100 "reset": true, 00:10:18.100 "nvme_admin": false, 00:10:18.100 "nvme_io": false, 00:10:18.100 "nvme_io_md": false, 00:10:18.100 "write_zeroes": true, 00:10:18.100 "zcopy": false, 00:10:18.100 "get_zone_info": false, 00:10:18.100 "zone_management": false, 00:10:18.100 "zone_append": false, 00:10:18.100 "compare": false, 00:10:18.100 
"compare_and_write": false, 00:10:18.100 "abort": false, 00:10:18.100 "seek_hole": false, 00:10:18.100 "seek_data": false, 00:10:18.100 "copy": false, 00:10:18.100 "nvme_iov_md": false 00:10:18.100 }, 00:10:18.100 "memory_domains": [ 00:10:18.100 { 00:10:18.100 "dma_device_id": "system", 00:10:18.100 "dma_device_type": 1 00:10:18.100 }, 00:10:18.100 { 00:10:18.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.100 "dma_device_type": 2 00:10:18.100 }, 00:10:18.100 { 00:10:18.100 "dma_device_id": "system", 00:10:18.100 "dma_device_type": 1 00:10:18.100 }, 00:10:18.100 { 00:10:18.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.100 "dma_device_type": 2 00:10:18.100 }, 00:10:18.100 { 00:10:18.100 "dma_device_id": "system", 00:10:18.101 "dma_device_type": 1 00:10:18.101 }, 00:10:18.101 { 00:10:18.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.101 "dma_device_type": 2 00:10:18.101 } 00:10:18.101 ], 00:10:18.101 "driver_specific": { 00:10:18.101 "raid": { 00:10:18.101 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:18.101 "strip_size_kb": 0, 00:10:18.101 "state": "online", 00:10:18.101 "raid_level": "raid1", 00:10:18.101 "superblock": true, 00:10:18.101 "num_base_bdevs": 3, 00:10:18.101 "num_base_bdevs_discovered": 3, 00:10:18.101 "num_base_bdevs_operational": 3, 00:10:18.101 "base_bdevs_list": [ 00:10:18.101 { 00:10:18.101 "name": "pt1", 00:10:18.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.101 "is_configured": true, 00:10:18.101 "data_offset": 2048, 00:10:18.101 "data_size": 63488 00:10:18.101 }, 00:10:18.101 { 00:10:18.101 "name": "pt2", 00:10:18.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.101 "is_configured": true, 00:10:18.101 "data_offset": 2048, 00:10:18.101 "data_size": 63488 00:10:18.101 }, 00:10:18.101 { 00:10:18.101 "name": "pt3", 00:10:18.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.101 "is_configured": true, 00:10:18.101 "data_offset": 2048, 00:10:18.101 "data_size": 63488 00:10:18.101 } 
00:10:18.101 ] 00:10:18.101 } 00:10:18.101 } 00:10:18.101 }' 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.101 pt2 00:10:18.101 pt3' 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.101 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.359 12:41:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.359 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 [2024-11-06 12:41:06.867442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49f6491f-9715-4daa-8d76-79cf1af2629e 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 49f6491f-9715-4daa-8d76-79cf1af2629e ']' 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 [2024-11-06 12:41:06.919086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.360 [2024-11-06 12:41:06.919250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.360 [2024-11-06 12:41:06.919469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.360 [2024-11-06 12:41:06.919717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.360 [2024-11-06 12:41:06.919844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:18.360 
12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.360 12:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.360 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.360 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:18.360 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:18.360 12:41:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.360 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.618 [2024-11-06 12:41:07.067209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:18.618 [2024-11-06 12:41:07.069808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:18.618 [2024-11-06 12:41:07.069882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:18.618 [2024-11-06 12:41:07.069957] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:18.618 [2024-11-06 12:41:07.070032] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:18.618 [2024-11-06 12:41:07.070065] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:18.618 [2024-11-06 12:41:07.070092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.618 [2024-11-06 12:41:07.070106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:18.618 request: 00:10:18.618 { 00:10:18.618 "name": "raid_bdev1", 00:10:18.618 "raid_level": "raid1", 00:10:18.618 "base_bdevs": [ 00:10:18.618 "malloc1", 00:10:18.618 "malloc2", 00:10:18.618 "malloc3" 00:10:18.618 ], 00:10:18.618 "superblock": false, 00:10:18.618 "method": "bdev_raid_create", 00:10:18.618 "req_id": 1 00:10:18.618 } 00:10:18.618 Got JSON-RPC error response 00:10:18.618 response: 00:10:18.618 { 00:10:18.618 "code": -17, 00:10:18.618 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:18.618 } 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.618 12:41:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.618 [2024-11-06 12:41:07.131146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.618 [2024-11-06 12:41:07.131373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.618 [2024-11-06 12:41:07.131455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.618 [2024-11-06 12:41:07.131561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.618 [2024-11-06 12:41:07.134575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.618 pt1 00:10:18.618 [2024-11-06 12:41:07.134725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.618 [2024-11-06 12:41:07.134839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.618 [2024-11-06 12:41:07.134907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.618 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.619 "name": "raid_bdev1", 00:10:18.619 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:18.619 "strip_size_kb": 0, 00:10:18.619 "state": "configuring", 00:10:18.619 
"raid_level": "raid1", 00:10:18.619 "superblock": true, 00:10:18.619 "num_base_bdevs": 3, 00:10:18.619 "num_base_bdevs_discovered": 1, 00:10:18.619 "num_base_bdevs_operational": 3, 00:10:18.619 "base_bdevs_list": [ 00:10:18.619 { 00:10:18.619 "name": "pt1", 00:10:18.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.619 "is_configured": true, 00:10:18.619 "data_offset": 2048, 00:10:18.619 "data_size": 63488 00:10:18.619 }, 00:10:18.619 { 00:10:18.619 "name": null, 00:10:18.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.619 "is_configured": false, 00:10:18.619 "data_offset": 2048, 00:10:18.619 "data_size": 63488 00:10:18.619 }, 00:10:18.619 { 00:10:18.619 "name": null, 00:10:18.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.619 "is_configured": false, 00:10:18.619 "data_offset": 2048, 00:10:18.619 "data_size": 63488 00:10:18.619 } 00:10:18.619 ] 00:10:18.619 }' 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.619 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.185 [2024-11-06 12:41:07.651361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.185 [2024-11-06 12:41:07.651581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.185 [2024-11-06 12:41:07.651665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:19.185 [2024-11-06 12:41:07.651687] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.185 [2024-11-06 12:41:07.652342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.185 [2024-11-06 12:41:07.652376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.185 [2024-11-06 12:41:07.652499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.185 [2024-11-06 12:41:07.652535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.185 pt2 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.185 [2024-11-06 12:41:07.659323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.185 "name": "raid_bdev1", 00:10:19.185 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:19.185 "strip_size_kb": 0, 00:10:19.185 "state": "configuring", 00:10:19.185 "raid_level": "raid1", 00:10:19.185 "superblock": true, 00:10:19.185 "num_base_bdevs": 3, 00:10:19.185 "num_base_bdevs_discovered": 1, 00:10:19.185 "num_base_bdevs_operational": 3, 00:10:19.185 "base_bdevs_list": [ 00:10:19.185 { 00:10:19.185 "name": "pt1", 00:10:19.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.185 "is_configured": true, 00:10:19.185 "data_offset": 2048, 00:10:19.185 "data_size": 63488 00:10:19.185 }, 00:10:19.185 { 00:10:19.185 "name": null, 00:10:19.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.185 "is_configured": false, 00:10:19.185 "data_offset": 0, 00:10:19.185 "data_size": 63488 00:10:19.185 }, 00:10:19.185 { 00:10:19.185 "name": null, 00:10:19.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.185 "is_configured": false, 00:10:19.185 "data_offset": 2048, 00:10:19.185 
"data_size": 63488 00:10:19.185 } 00:10:19.185 ] 00:10:19.185 }' 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.185 12:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.751 [2024-11-06 12:41:08.219513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.751 [2024-11-06 12:41:08.219787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.751 [2024-11-06 12:41:08.219862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:19.751 [2024-11-06 12:41:08.220136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.751 [2024-11-06 12:41:08.220844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.751 [2024-11-06 12:41:08.220888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.751 [2024-11-06 12:41:08.221004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.751 [2024-11-06 12:41:08.221062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.751 pt2 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.751 [2024-11-06 12:41:08.231518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.751 [2024-11-06 12:41:08.231761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.751 [2024-11-06 12:41:08.231934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.751 [2024-11-06 12:41:08.232058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.751 [2024-11-06 12:41:08.232764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.751 [2024-11-06 12:41:08.232935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.751 [2024-11-06 12:41:08.233165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:19.751 [2024-11-06 12:41:08.233343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.751 [2024-11-06 12:41:08.233649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:19.751 [2024-11-06 12:41:08.233784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.751 [2024-11-06 12:41:08.234234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:19.751 [2024-11-06 12:41:08.234574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:19.751 [2024-11-06 12:41:08.234691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:19.751 [2024-11-06 12:41:08.235015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.751 pt3 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.751 "name": "raid_bdev1", 00:10:19.751 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:19.751 "strip_size_kb": 0, 00:10:19.751 "state": "online", 00:10:19.751 "raid_level": "raid1", 00:10:19.751 "superblock": true, 00:10:19.751 "num_base_bdevs": 3, 00:10:19.751 "num_base_bdevs_discovered": 3, 00:10:19.751 "num_base_bdevs_operational": 3, 00:10:19.751 "base_bdevs_list": [ 00:10:19.751 { 00:10:19.751 "name": "pt1", 00:10:19.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.751 "is_configured": true, 00:10:19.751 "data_offset": 2048, 00:10:19.751 "data_size": 63488 00:10:19.751 }, 00:10:19.751 { 00:10:19.751 "name": "pt2", 00:10:19.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.751 "is_configured": true, 00:10:19.751 "data_offset": 2048, 00:10:19.751 "data_size": 63488 00:10:19.751 }, 00:10:19.751 { 00:10:19.751 "name": "pt3", 00:10:19.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.751 "is_configured": true, 00:10:19.751 "data_offset": 2048, 00:10:19.751 "data_size": 63488 00:10:19.751 } 00:10:19.751 ] 00:10:19.751 }' 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.751 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.334 [2024-11-06 12:41:08.768086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.334 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.334 "name": "raid_bdev1", 00:10:20.334 "aliases": [ 00:10:20.335 "49f6491f-9715-4daa-8d76-79cf1af2629e" 00:10:20.335 ], 00:10:20.335 "product_name": "Raid Volume", 00:10:20.335 "block_size": 512, 00:10:20.335 "num_blocks": 63488, 00:10:20.335 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:20.335 "assigned_rate_limits": { 00:10:20.335 "rw_ios_per_sec": 0, 00:10:20.335 "rw_mbytes_per_sec": 0, 00:10:20.335 "r_mbytes_per_sec": 0, 00:10:20.335 "w_mbytes_per_sec": 0 00:10:20.335 }, 00:10:20.335 "claimed": false, 00:10:20.335 "zoned": false, 00:10:20.335 "supported_io_types": { 00:10:20.335 "read": true, 00:10:20.335 "write": true, 00:10:20.335 "unmap": false, 00:10:20.335 "flush": false, 00:10:20.335 "reset": true, 00:10:20.335 "nvme_admin": false, 00:10:20.335 "nvme_io": false, 00:10:20.335 "nvme_io_md": false, 00:10:20.335 "write_zeroes": true, 00:10:20.335 "zcopy": false, 00:10:20.335 "get_zone_info": false, 
00:10:20.335 "zone_management": false, 00:10:20.335 "zone_append": false, 00:10:20.335 "compare": false, 00:10:20.335 "compare_and_write": false, 00:10:20.335 "abort": false, 00:10:20.335 "seek_hole": false, 00:10:20.335 "seek_data": false, 00:10:20.335 "copy": false, 00:10:20.335 "nvme_iov_md": false 00:10:20.335 }, 00:10:20.335 "memory_domains": [ 00:10:20.335 { 00:10:20.335 "dma_device_id": "system", 00:10:20.335 "dma_device_type": 1 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.335 "dma_device_type": 2 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "dma_device_id": "system", 00:10:20.335 "dma_device_type": 1 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.335 "dma_device_type": 2 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "dma_device_id": "system", 00:10:20.335 "dma_device_type": 1 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.335 "dma_device_type": 2 00:10:20.335 } 00:10:20.335 ], 00:10:20.335 "driver_specific": { 00:10:20.335 "raid": { 00:10:20.335 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:20.335 "strip_size_kb": 0, 00:10:20.335 "state": "online", 00:10:20.335 "raid_level": "raid1", 00:10:20.335 "superblock": true, 00:10:20.335 "num_base_bdevs": 3, 00:10:20.335 "num_base_bdevs_discovered": 3, 00:10:20.335 "num_base_bdevs_operational": 3, 00:10:20.335 "base_bdevs_list": [ 00:10:20.335 { 00:10:20.335 "name": "pt1", 00:10:20.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.335 "is_configured": true, 00:10:20.335 "data_offset": 2048, 00:10:20.335 "data_size": 63488 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "name": "pt2", 00:10:20.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.335 "is_configured": true, 00:10:20.335 "data_offset": 2048, 00:10:20.335 "data_size": 63488 00:10:20.335 }, 00:10:20.335 { 00:10:20.335 "name": "pt3", 00:10:20.335 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:20.335 "is_configured": true, 00:10:20.335 "data_offset": 2048, 00:10:20.335 "data_size": 63488 00:10:20.335 } 00:10:20.335 ] 00:10:20.335 } 00:10:20.335 } 00:10:20.335 }' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.335 pt2 00:10:20.335 pt3' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.335 12:41:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.335 12:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:20.594 [2024-11-06 12:41:09.080161] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 49f6491f-9715-4daa-8d76-79cf1af2629e '!=' 49f6491f-9715-4daa-8d76-79cf1af2629e ']' 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.594 [2024-11-06 12:41:09.123802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.594 12:41:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.594 "name": "raid_bdev1", 00:10:20.594 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:20.594 "strip_size_kb": 0, 00:10:20.594 "state": "online", 00:10:20.594 "raid_level": "raid1", 00:10:20.594 "superblock": true, 00:10:20.594 "num_base_bdevs": 3, 00:10:20.594 "num_base_bdevs_discovered": 2, 00:10:20.594 "num_base_bdevs_operational": 2, 00:10:20.594 "base_bdevs_list": [ 00:10:20.594 { 00:10:20.594 "name": null, 00:10:20.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.594 "is_configured": false, 00:10:20.594 "data_offset": 0, 00:10:20.594 "data_size": 63488 00:10:20.594 }, 00:10:20.594 { 00:10:20.594 "name": "pt2", 00:10:20.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.594 "is_configured": true, 00:10:20.594 "data_offset": 2048, 00:10:20.594 "data_size": 63488 00:10:20.594 }, 00:10:20.594 { 00:10:20.594 "name": "pt3", 00:10:20.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.594 "is_configured": true, 00:10:20.594 "data_offset": 2048, 00:10:20.594 "data_size": 63488 00:10:20.594 } 
00:10:20.594 ] 00:10:20.594 }' 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.594 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 [2024-11-06 12:41:09.655992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.162 [2024-11-06 12:41:09.656176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.162 [2024-11-06 12:41:09.656326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.162 [2024-11-06 12:41:09.656422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.162 [2024-11-06 12:41:09.656451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.162 12:41:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 [2024-11-06 12:41:09.735980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.162 [2024-11-06 12:41:09.736232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.162 [2024-11-06 12:41:09.736331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:21.162 [2024-11-06 12:41:09.736536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.162 [2024-11-06 12:41:09.739569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.162 [2024-11-06 12:41:09.739628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.162 [2024-11-06 12:41:09.739738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.162 [2024-11-06 12:41:09.739813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.162 pt2 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.162 12:41:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.162 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.162 "name": "raid_bdev1", 00:10:21.162 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:21.162 "strip_size_kb": 0, 00:10:21.162 "state": "configuring", 00:10:21.162 "raid_level": "raid1", 00:10:21.163 "superblock": true, 00:10:21.163 "num_base_bdevs": 3, 00:10:21.163 "num_base_bdevs_discovered": 1, 00:10:21.163 "num_base_bdevs_operational": 2, 00:10:21.163 "base_bdevs_list": [ 00:10:21.163 { 00:10:21.163 "name": null, 00:10:21.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.163 "is_configured": false, 00:10:21.163 "data_offset": 2048, 00:10:21.163 "data_size": 63488 00:10:21.163 }, 00:10:21.163 { 00:10:21.163 "name": "pt2", 00:10:21.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.163 "is_configured": true, 00:10:21.163 "data_offset": 2048, 00:10:21.163 "data_size": 63488 00:10:21.163 }, 00:10:21.163 { 00:10:21.163 "name": null, 00:10:21.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.163 "is_configured": false, 00:10:21.163 "data_offset": 2048, 00:10:21.163 "data_size": 63488 00:10:21.163 } 
00:10:21.163 ] 00:10:21.163 }' 00:10:21.163 12:41:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.163 12:41:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.730 [2024-11-06 12:41:10.260222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.730 [2024-11-06 12:41:10.260463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.730 [2024-11-06 12:41:10.260628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:21.730 [2024-11-06 12:41:10.260810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.730 [2024-11-06 12:41:10.261447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.730 [2024-11-06 12:41:10.261491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.730 [2024-11-06 12:41:10.261616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.730 [2024-11-06 12:41:10.261677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.730 [2024-11-06 12:41:10.261836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:21.730 [2024-11-06 12:41:10.261861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.730 [2024-11-06 12:41:10.262231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:21.730 [2024-11-06 12:41:10.262449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.730 [2024-11-06 12:41:10.262467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:21.730 [2024-11-06 12:41:10.262652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.730 pt3 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.730 "name": "raid_bdev1", 00:10:21.730 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:21.730 "strip_size_kb": 0, 00:10:21.730 "state": "online", 00:10:21.730 "raid_level": "raid1", 00:10:21.730 "superblock": true, 00:10:21.730 "num_base_bdevs": 3, 00:10:21.730 "num_base_bdevs_discovered": 2, 00:10:21.730 "num_base_bdevs_operational": 2, 00:10:21.730 "base_bdevs_list": [ 00:10:21.730 { 00:10:21.730 "name": null, 00:10:21.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.730 "is_configured": false, 00:10:21.730 "data_offset": 2048, 00:10:21.730 "data_size": 63488 00:10:21.730 }, 00:10:21.730 { 00:10:21.730 "name": "pt2", 00:10:21.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.730 "is_configured": true, 00:10:21.730 "data_offset": 2048, 00:10:21.730 "data_size": 63488 00:10:21.730 }, 00:10:21.730 { 00:10:21.730 "name": "pt3", 00:10:21.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.730 "is_configured": true, 00:10:21.730 "data_offset": 2048, 00:10:21.730 "data_size": 63488 00:10:21.730 } 00:10:21.730 ] 00:10:21.730 }' 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.730 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.296 [2024-11-06 12:41:10.792769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.296 [2024-11-06 12:41:10.792861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.296 [2024-11-06 12:41:10.793034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.296 [2024-11-06 12:41:10.793200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.296 [2024-11-06 12:41:10.793231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.296 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.297 [2024-11-06 12:41:10.868743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.297 [2024-11-06 12:41:10.868840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.297 [2024-11-06 12:41:10.868881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:22.297 [2024-11-06 12:41:10.868900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.297 [2024-11-06 12:41:10.872967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.297 pt1 00:10:22.297 [2024-11-06 12:41:10.873228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.297 [2024-11-06 12:41:10.873374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:22.297 [2024-11-06 12:41:10.873452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.297 [2024-11-06 12:41:10.873733] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:22.297 [2024-11-06 12:41:10.873756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.297 [2024-11-06 12:41:10.873785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:22.297 [2024-11-06 12:41:10.873871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.297 12:41:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.297 "name": "raid_bdev1", 00:10:22.297 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:22.297 "strip_size_kb": 0, 00:10:22.297 "state": "configuring", 00:10:22.297 "raid_level": "raid1", 00:10:22.297 "superblock": true, 00:10:22.297 "num_base_bdevs": 3, 00:10:22.297 "num_base_bdevs_discovered": 1, 00:10:22.297 "num_base_bdevs_operational": 2, 00:10:22.297 "base_bdevs_list": [ 00:10:22.297 { 00:10:22.297 "name": null, 00:10:22.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.297 "is_configured": false, 00:10:22.297 "data_offset": 2048, 00:10:22.297 "data_size": 63488 00:10:22.297 }, 00:10:22.297 { 00:10:22.297 "name": "pt2", 00:10:22.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.297 "is_configured": true, 00:10:22.297 "data_offset": 2048, 00:10:22.297 "data_size": 63488 00:10:22.297 }, 00:10:22.297 { 00:10:22.297 "name": null, 00:10:22.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.297 "is_configured": false, 00:10:22.297 "data_offset": 2048, 00:10:22.297 "data_size": 63488 00:10:22.297 } 00:10:22.297 ] 00:10:22.297 }' 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.297 12:41:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.867 [2024-11-06 12:41:11.453655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.867 [2024-11-06 12:41:11.453902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.867 [2024-11-06 12:41:11.453984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:22.867 [2024-11-06 12:41:11.454007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.867 [2024-11-06 12:41:11.454692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.867 [2024-11-06 12:41:11.454724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.867 [2024-11-06 12:41:11.454855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.867 [2024-11-06 12:41:11.454925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.867 [2024-11-06 12:41:11.455099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:22.867 [2024-11-06 12:41:11.455115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.867 [2024-11-06 12:41:11.455490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:22.867 [2024-11-06 12:41:11.455695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000008900 00:10:22.867 [2024-11-06 12:41:11.455719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:22.867 [2024-11-06 12:41:11.455893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.867 pt3 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.867 12:41:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.867 "name": "raid_bdev1", 00:10:22.867 "uuid": "49f6491f-9715-4daa-8d76-79cf1af2629e", 00:10:22.867 "strip_size_kb": 0, 00:10:22.867 "state": "online", 00:10:22.867 "raid_level": "raid1", 00:10:22.867 "superblock": true, 00:10:22.867 "num_base_bdevs": 3, 00:10:22.867 "num_base_bdevs_discovered": 2, 00:10:22.867 "num_base_bdevs_operational": 2, 00:10:22.867 "base_bdevs_list": [ 00:10:22.867 { 00:10:22.867 "name": null, 00:10:22.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.867 "is_configured": false, 00:10:22.867 "data_offset": 2048, 00:10:22.867 "data_size": 63488 00:10:22.867 }, 00:10:22.867 { 00:10:22.867 "name": "pt2", 00:10:22.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.867 "is_configured": true, 00:10:22.867 "data_offset": 2048, 00:10:22.867 "data_size": 63488 00:10:22.867 }, 00:10:22.867 { 00:10:22.867 "name": "pt3", 00:10:22.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.867 "is_configured": true, 00:10:22.867 "data_offset": 2048, 00:10:22.867 "data_size": 63488 00:10:22.867 } 00:10:22.867 ] 00:10:22.867 }' 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.867 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.444 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:23.444 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.444 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.444 12:41:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:23.444 12:41:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:23.444 [2024-11-06 12:41:12.042426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 49f6491f-9715-4daa-8d76-79cf1af2629e '!=' 49f6491f-9715-4daa-8d76-79cf1af2629e ']' 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68732 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68732 ']' 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68732 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:23.444 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:23.702 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68732 00:10:23.702 killing process with pid 68732 00:10:23.702 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:23.702 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:23.702 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68732' 
00:10:23.702 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68732 00:10:23.702 12:41:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68732 00:10:23.702 [2024-11-06 12:41:12.124569] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.702 [2024-11-06 12:41:12.124768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.702 [2024-11-06 12:41:12.124884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.702 [2024-11-06 12:41:12.124915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:23.959 [2024-11-06 12:41:12.412024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.894 12:41:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:24.894 00:10:24.894 real 0m8.745s 00:10:24.894 user 0m14.247s 00:10:24.894 sys 0m1.222s 00:10:24.894 ************************************ 00:10:24.894 END TEST raid_superblock_test 00:10:24.894 ************************************ 00:10:24.894 12:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:24.894 12:41:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.153 12:41:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:25.153 12:41:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:25.153 12:41:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.153 12:41:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.153 ************************************ 00:10:25.153 START TEST raid_read_error_test 00:10:25.153 ************************************ 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test 
raid1 3 read 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.153 
12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ypdK4hCPza 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69183 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69183 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69183 ']' 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:25.153 12:41:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.153 [2024-11-06 12:41:13.706261] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:10:25.153 [2024-11-06 12:41:13.706671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69183 ] 00:10:25.411 [2024-11-06 12:41:13.889638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.411 [2024-11-06 12:41:14.035821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.669 [2024-11-06 12:41:14.263663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.669 [2024-11-06 12:41:14.263726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.278 BaseBdev1_malloc 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.278 true 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.278 [2024-11-06 12:41:14.812129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.278 [2024-11-06 12:41:14.812230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.278 [2024-11-06 12:41:14.812262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.278 [2024-11-06 12:41:14.812281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.278 [2024-11-06 12:41:14.815412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.278 [2024-11-06 12:41:14.815463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.278 BaseBdev1 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.278 BaseBdev2_malloc 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.278 true 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.278 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.279 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.279 [2024-11-06 12:41:14.877681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.279 [2024-11-06 12:41:14.877909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.279 [2024-11-06 12:41:14.877946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.279 [2024-11-06 12:41:14.877965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.279 [2024-11-06 12:41:14.881065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.279 [2024-11-06 12:41:14.881130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.279 BaseBdev2 00:10:26.279 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.279 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.279 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:26.279 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.279 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 BaseBdev3_malloc 00:10:26.537 12:41:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 true 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 [2024-11-06 12:41:14.952686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:26.537 [2024-11-06 12:41:14.952764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.537 [2024-11-06 12:41:14.952793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:26.537 [2024-11-06 12:41:14.952812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.537 [2024-11-06 12:41:14.955896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.537 BaseBdev3 00:10:26.537 [2024-11-06 12:41:14.956114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 [2024-11-06 12:41:14.960792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.537 [2024-11-06 12:41:14.963545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.537 [2024-11-06 12:41:14.963686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.537 [2024-11-06 12:41:14.964000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:26.537 [2024-11-06 12:41:14.964020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.537 [2024-11-06 12:41:14.964367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:26.537 [2024-11-06 12:41:14.964602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.537 [2024-11-06 12:41:14.964624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:26.537 [2024-11-06 12:41:14.964896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.537 12:41:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.537 12:41:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.537 12:41:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.537 "name": "raid_bdev1", 00:10:26.537 "uuid": "20759531-bc07-49d1-a5ea-ef4a17a8a9f0", 00:10:26.537 "strip_size_kb": 0, 00:10:26.537 "state": "online", 00:10:26.537 "raid_level": "raid1", 00:10:26.537 "superblock": true, 00:10:26.537 "num_base_bdevs": 3, 00:10:26.537 "num_base_bdevs_discovered": 3, 00:10:26.538 "num_base_bdevs_operational": 3, 00:10:26.538 "base_bdevs_list": [ 00:10:26.538 { 00:10:26.538 "name": "BaseBdev1", 00:10:26.538 "uuid": "4f4378e9-4f2e-54d1-acc2-3cba56d4cc02", 00:10:26.538 "is_configured": true, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 00:10:26.538 }, 00:10:26.538 { 00:10:26.538 "name": "BaseBdev2", 00:10:26.538 "uuid": "59c44682-9db8-5159-8c24-9c3351ef0213", 00:10:26.538 "is_configured": true, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 
00:10:26.538 }, 00:10:26.538 { 00:10:26.538 "name": "BaseBdev3", 00:10:26.538 "uuid": "c93b1d05-f5e3-5863-8417-d23543ee0a15", 00:10:26.538 "is_configured": true, 00:10:26.538 "data_offset": 2048, 00:10:26.538 "data_size": 63488 00:10:26.538 } 00:10:26.538 ] 00:10:26.538 }' 00:10:26.538 12:41:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.538 12:41:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.104 12:41:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.104 12:41:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.104 [2024-11-06 12:41:15.618560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.038 
12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.038 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.038 "name": "raid_bdev1", 00:10:28.038 "uuid": "20759531-bc07-49d1-a5ea-ef4a17a8a9f0", 00:10:28.038 "strip_size_kb": 0, 00:10:28.038 "state": "online", 00:10:28.038 "raid_level": "raid1", 00:10:28.038 "superblock": true, 00:10:28.038 "num_base_bdevs": 3, 00:10:28.038 "num_base_bdevs_discovered": 3, 00:10:28.038 "num_base_bdevs_operational": 3, 00:10:28.038 "base_bdevs_list": [ 00:10:28.038 { 00:10:28.038 "name": "BaseBdev1", 00:10:28.038 "uuid": "4f4378e9-4f2e-54d1-acc2-3cba56d4cc02", 
00:10:28.038 "is_configured": true, 00:10:28.038 "data_offset": 2048, 00:10:28.039 "data_size": 63488 00:10:28.039 }, 00:10:28.039 { 00:10:28.039 "name": "BaseBdev2", 00:10:28.039 "uuid": "59c44682-9db8-5159-8c24-9c3351ef0213", 00:10:28.039 "is_configured": true, 00:10:28.039 "data_offset": 2048, 00:10:28.039 "data_size": 63488 00:10:28.039 }, 00:10:28.039 { 00:10:28.039 "name": "BaseBdev3", 00:10:28.039 "uuid": "c93b1d05-f5e3-5863-8417-d23543ee0a15", 00:10:28.039 "is_configured": true, 00:10:28.039 "data_offset": 2048, 00:10:28.039 "data_size": 63488 00:10:28.039 } 00:10:28.039 ] 00:10:28.039 }' 00:10:28.039 12:41:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.039 12:41:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.606 [2024-11-06 12:41:17.015535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.606 [2024-11-06 12:41:17.015593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.606 [2024-11-06 12:41:17.019081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.606 [2024-11-06 12:41:17.019152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.606 [2024-11-06 12:41:17.019370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.606 [2024-11-06 12:41:17.019402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:28.606 { 00:10:28.606 "results": [ 00:10:28.606 { 00:10:28.606 "job": "raid_bdev1", 
00:10:28.606 "core_mask": "0x1", 00:10:28.606 "workload": "randrw", 00:10:28.606 "percentage": 50, 00:10:28.606 "status": "finished", 00:10:28.606 "queue_depth": 1, 00:10:28.606 "io_size": 131072, 00:10:28.606 "runtime": 1.394482, 00:10:28.606 "iops": 8839.841604265957, 00:10:28.606 "mibps": 1104.9802005332447, 00:10:28.606 "io_failed": 0, 00:10:28.606 "io_timeout": 0, 00:10:28.606 "avg_latency_us": 108.21566140843824, 00:10:28.606 "min_latency_us": 43.054545454545455, 00:10:28.606 "max_latency_us": 1921.3963636363637 00:10:28.606 } 00:10:28.606 ], 00:10:28.606 "core_count": 1 00:10:28.606 } 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69183 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69183 ']' 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69183 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69183 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:28.606 killing process with pid 69183 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69183' 00:10:28.606 12:41:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69183 00:10:28.606 [2024-11-06 12:41:17.051828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.606 12:41:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69183 00:10:28.606 [2024-11-06 12:41:17.260511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ypdK4hCPza 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:29.985 00:10:29.985 real 0m4.783s 00:10:29.985 user 0m5.896s 00:10:29.985 sys 0m0.659s 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.985 12:41:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.985 ************************************ 00:10:29.985 END TEST raid_read_error_test 00:10:29.985 ************************************ 00:10:29.985 12:41:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:29.985 12:41:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:29.985 12:41:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.985 12:41:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.985 ************************************ 00:10:29.985 START TEST raid_write_error_test 00:10:29.985 ************************************ 00:10:29.985 12:41:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IRbFb4Vxeg 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69329 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69329 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69329 ']' 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.985 12:41:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.985 [2024-11-06 12:41:18.554515] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:10:29.985 [2024-11-06 12:41:18.554706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69329 ] 00:10:30.244 [2024-11-06 12:41:18.751648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.503 [2024-11-06 12:41:18.905178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.503 [2024-11-06 12:41:19.109161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.503 [2024-11-06 12:41:19.109252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 BaseBdev1_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 true 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-06 12:41:19.568047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:31.082 [2024-11-06 12:41:19.568135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.082 [2024-11-06 12:41:19.568167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:31.082 [2024-11-06 12:41:19.568184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.082 [2024-11-06 12:41:19.571357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.082 [2024-11-06 12:41:19.571409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:31.082 BaseBdev1 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.082 BaseBdev2_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 true 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-06 12:41:19.628869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:31.082 [2024-11-06 12:41:19.628961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.082 [2024-11-06 12:41:19.628991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:31.082 [2024-11-06 12:41:19.629009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.082 [2024-11-06 12:41:19.632000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.082 [2024-11-06 12:41:19.632067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:31.082 BaseBdev2 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:31.082 12:41:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 BaseBdev3_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 true 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-06 12:41:19.699340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:31.082 [2024-11-06 12:41:19.699413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.082 [2024-11-06 12:41:19.699442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:31.082 [2024-11-06 12:41:19.699461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.082 [2024-11-06 12:41:19.702352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.082 [2024-11-06 12:41:19.702415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:31.082 BaseBdev3 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.082 [2024-11-06 12:41:19.707454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.082 [2024-11-06 12:41:19.710102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.082 [2024-11-06 12:41:19.710248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.082 [2024-11-06 12:41:19.710555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:31.082 [2024-11-06 12:41:19.710614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.082 [2024-11-06 12:41:19.710955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:31.082 [2024-11-06 12:41:19.711256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:31.082 [2024-11-06 12:41:19.711297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:31.082 [2024-11-06 12:41:19.711554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.082 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.083 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.344 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.344 "name": "raid_bdev1", 00:10:31.344 "uuid": "e91025c5-fb0e-4636-95ac-e04f9c790a46", 00:10:31.344 "strip_size_kb": 0, 00:10:31.344 "state": "online", 00:10:31.344 "raid_level": "raid1", 00:10:31.344 "superblock": true, 00:10:31.344 "num_base_bdevs": 3, 00:10:31.344 "num_base_bdevs_discovered": 3, 00:10:31.344 "num_base_bdevs_operational": 3, 00:10:31.344 "base_bdevs_list": [ 00:10:31.344 { 00:10:31.344 "name": "BaseBdev1", 00:10:31.344 
"uuid": "702631a3-f3a6-5cdf-938a-33019fe68680", 00:10:31.344 "is_configured": true, 00:10:31.344 "data_offset": 2048, 00:10:31.344 "data_size": 63488 00:10:31.344 }, 00:10:31.344 { 00:10:31.344 "name": "BaseBdev2", 00:10:31.344 "uuid": "15f51a71-84b2-5609-8a7b-cc09aaa14367", 00:10:31.344 "is_configured": true, 00:10:31.344 "data_offset": 2048, 00:10:31.344 "data_size": 63488 00:10:31.344 }, 00:10:31.344 { 00:10:31.344 "name": "BaseBdev3", 00:10:31.344 "uuid": "210c6448-7672-5837-8d7c-b24963bb4208", 00:10:31.344 "is_configured": true, 00:10:31.344 "data_offset": 2048, 00:10:31.344 "data_size": 63488 00:10:31.344 } 00:10:31.344 ] 00:10:31.344 }' 00:10:31.344 12:41:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.344 12:41:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.602 12:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.602 12:41:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.860 [2024-11-06 12:41:20.369302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 [2024-11-06 12:41:21.232664] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:32.795 [2024-11-06 12:41:21.232745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.795 [2024-11-06 12:41:21.233055] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.795 
12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.795 "name": "raid_bdev1", 00:10:32.795 "uuid": "e91025c5-fb0e-4636-95ac-e04f9c790a46", 00:10:32.795 "strip_size_kb": 0, 00:10:32.795 "state": "online", 00:10:32.795 "raid_level": "raid1", 00:10:32.795 "superblock": true, 00:10:32.795 "num_base_bdevs": 3, 00:10:32.795 "num_base_bdevs_discovered": 2, 00:10:32.795 "num_base_bdevs_operational": 2, 00:10:32.795 "base_bdevs_list": [ 00:10:32.795 { 00:10:32.795 "name": null, 00:10:32.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.795 "is_configured": false, 00:10:32.795 "data_offset": 0, 00:10:32.795 "data_size": 63488 00:10:32.795 }, 00:10:32.795 { 00:10:32.795 "name": "BaseBdev2", 00:10:32.795 "uuid": "15f51a71-84b2-5609-8a7b-cc09aaa14367", 00:10:32.795 "is_configured": true, 00:10:32.795 "data_offset": 2048, 00:10:32.795 "data_size": 63488 00:10:32.795 }, 00:10:32.795 { 00:10:32.795 "name": "BaseBdev3", 00:10:32.795 "uuid": "210c6448-7672-5837-8d7c-b24963bb4208", 00:10:32.795 "is_configured": true, 00:10:32.795 "data_offset": 2048, 00:10:32.795 "data_size": 63488 00:10:32.795 } 00:10:32.795 ] 00:10:32.795 }' 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.795 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.363 [2024-11-06 12:41:21.746516] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.363 [2024-11-06 12:41:21.746563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.363 [2024-11-06 12:41:21.749874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.363 [2024-11-06 12:41:21.749969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.363 [2024-11-06 12:41:21.750089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.363 [2024-11-06 12:41:21.750114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:33.363 { 00:10:33.363 "results": [ 00:10:33.363 { 00:10:33.363 "job": "raid_bdev1", 00:10:33.363 "core_mask": "0x1", 00:10:33.363 "workload": "randrw", 00:10:33.363 "percentage": 50, 00:10:33.363 "status": "finished", 00:10:33.363 "queue_depth": 1, 00:10:33.363 "io_size": 131072, 00:10:33.363 "runtime": 1.374432, 00:10:33.363 "iops": 9651.25957486438, 00:10:33.363 "mibps": 1206.4074468580475, 00:10:33.363 "io_failed": 0, 00:10:33.363 "io_timeout": 0, 00:10:33.363 "avg_latency_us": 99.42342432237947, 00:10:33.363 "min_latency_us": 40.02909090909091, 00:10:33.363 "max_latency_us": 1668.189090909091 00:10:33.363 } 00:10:33.363 ], 00:10:33.363 "core_count": 1 00:10:33.363 } 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69329 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69329 ']' 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69329 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:33.363 12:41:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69329 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:33.363 killing process with pid 69329 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69329' 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69329 00:10:33.363 [2024-11-06 12:41:21.785925] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.363 12:41:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69329 00:10:33.363 [2024-11-06 12:41:22.012125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IRbFb4Vxeg 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.779 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:34.780 12:41:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:34.780 00:10:34.780 real 0m4.784s 00:10:34.780 user 0m5.824s 00:10:34.780 sys 0m0.651s 00:10:34.780 12:41:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.780 ************************************ 00:10:34.780 END TEST raid_write_error_test 00:10:34.780 ************************************ 00:10:34.780 12:41:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.780 12:41:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:34.780 12:41:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:34.780 12:41:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:34.780 12:41:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:34.780 12:41:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:34.780 12:41:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.780 ************************************ 00:10:34.780 START TEST raid_state_function_test 00:10:34.780 ************************************ 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:34.780 
12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69478 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69478' 00:10:34.780 Process raid pid: 69478 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69478 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69478 ']' 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:34.780 12:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.780 [2024-11-06 12:41:23.368526] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:10:34.780 [2024-11-06 12:41:23.368701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.039 [2024-11-06 12:41:23.546843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.297 [2024-11-06 12:41:23.697310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.297 [2024-11-06 12:41:23.923578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.297 [2024-11-06 12:41:23.923643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.864 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:35.864 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:35.864 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.864 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.864 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.865 [2024-11-06 12:41:24.410513] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.865 [2024-11-06 12:41:24.410584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.865 [2024-11-06 12:41:24.410602] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.865 [2024-11-06 12:41:24.410619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.865 [2024-11-06 12:41:24.410629] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:35.865 [2024-11-06 12:41:24.410644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.865 [2024-11-06 12:41:24.410654] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.865 [2024-11-06 12:41:24.410668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.865 "name": "Existed_Raid", 00:10:35.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.865 "strip_size_kb": 64, 00:10:35.865 "state": "configuring", 00:10:35.865 "raid_level": "raid0", 00:10:35.865 "superblock": false, 00:10:35.865 "num_base_bdevs": 4, 00:10:35.865 "num_base_bdevs_discovered": 0, 00:10:35.865 "num_base_bdevs_operational": 4, 00:10:35.865 "base_bdevs_list": [ 00:10:35.865 { 00:10:35.865 "name": "BaseBdev1", 00:10:35.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.865 "is_configured": false, 00:10:35.865 "data_offset": 0, 00:10:35.865 "data_size": 0 00:10:35.865 }, 00:10:35.865 { 00:10:35.865 "name": "BaseBdev2", 00:10:35.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.865 "is_configured": false, 00:10:35.865 "data_offset": 0, 00:10:35.865 "data_size": 0 00:10:35.865 }, 00:10:35.865 { 00:10:35.865 "name": "BaseBdev3", 00:10:35.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.865 "is_configured": false, 00:10:35.865 "data_offset": 0, 00:10:35.865 "data_size": 0 00:10:35.865 }, 00:10:35.865 { 00:10:35.865 "name": "BaseBdev4", 00:10:35.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.865 "is_configured": false, 00:10:35.865 "data_offset": 0, 00:10:35.865 "data_size": 0 00:10:35.865 } 00:10:35.865 ] 00:10:35.865 }' 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.865 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 [2024-11-06 12:41:24.914647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.432 [2024-11-06 12:41:24.914717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 [2024-11-06 12:41:24.922613] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.432 [2024-11-06 12:41:24.922671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.432 [2024-11-06 12:41:24.922686] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.432 [2024-11-06 12:41:24.922702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.432 [2024-11-06 12:41:24.922712] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.432 [2024-11-06 12:41:24.922727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.432 [2024-11-06 12:41:24.922736] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.432 [2024-11-06 12:41:24.922751] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 [2024-11-06 12:41:24.973012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.432 BaseBdev1 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.432 12:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 [ 00:10:36.432 { 00:10:36.432 "name": "BaseBdev1", 00:10:36.432 "aliases": [ 00:10:36.432 "49880ea5-064e-4592-b029-f878edbc55dc" 00:10:36.432 ], 00:10:36.432 "product_name": "Malloc disk", 00:10:36.432 "block_size": 512, 00:10:36.432 "num_blocks": 65536, 00:10:36.432 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:36.432 "assigned_rate_limits": { 00:10:36.432 "rw_ios_per_sec": 0, 00:10:36.432 "rw_mbytes_per_sec": 0, 00:10:36.432 "r_mbytes_per_sec": 0, 00:10:36.432 "w_mbytes_per_sec": 0 00:10:36.432 }, 00:10:36.432 "claimed": true, 00:10:36.432 "claim_type": "exclusive_write", 00:10:36.432 "zoned": false, 00:10:36.432 "supported_io_types": { 00:10:36.432 "read": true, 00:10:36.432 "write": true, 00:10:36.432 "unmap": true, 00:10:36.432 "flush": true, 00:10:36.432 "reset": true, 00:10:36.432 "nvme_admin": false, 00:10:36.432 "nvme_io": false, 00:10:36.432 "nvme_io_md": false, 00:10:36.432 "write_zeroes": true, 00:10:36.432 "zcopy": true, 00:10:36.432 "get_zone_info": false, 00:10:36.432 "zone_management": false, 00:10:36.432 "zone_append": false, 00:10:36.432 "compare": false, 00:10:36.432 "compare_and_write": false, 00:10:36.432 "abort": true, 00:10:36.432 "seek_hole": false, 00:10:36.432 "seek_data": false, 00:10:36.432 "copy": true, 00:10:36.432 "nvme_iov_md": false 00:10:36.432 }, 00:10:36.432 "memory_domains": [ 00:10:36.432 { 00:10:36.432 "dma_device_id": "system", 00:10:36.432 "dma_device_type": 1 00:10:36.432 }, 00:10:36.432 { 00:10:36.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.432 "dma_device_type": 2 00:10:36.432 } 00:10:36.432 ], 00:10:36.432 "driver_specific": {} 00:10:36.432 } 00:10:36.432 ] 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.432 "name": "Existed_Raid", 
00:10:36.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.432 "strip_size_kb": 64, 00:10:36.432 "state": "configuring", 00:10:36.432 "raid_level": "raid0", 00:10:36.432 "superblock": false, 00:10:36.432 "num_base_bdevs": 4, 00:10:36.432 "num_base_bdevs_discovered": 1, 00:10:36.432 "num_base_bdevs_operational": 4, 00:10:36.432 "base_bdevs_list": [ 00:10:36.432 { 00:10:36.432 "name": "BaseBdev1", 00:10:36.432 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:36.432 "is_configured": true, 00:10:36.432 "data_offset": 0, 00:10:36.432 "data_size": 65536 00:10:36.432 }, 00:10:36.432 { 00:10:36.432 "name": "BaseBdev2", 00:10:36.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.432 "is_configured": false, 00:10:36.432 "data_offset": 0, 00:10:36.432 "data_size": 0 00:10:36.432 }, 00:10:36.432 { 00:10:36.432 "name": "BaseBdev3", 00:10:36.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.432 "is_configured": false, 00:10:36.432 "data_offset": 0, 00:10:36.432 "data_size": 0 00:10:36.432 }, 00:10:36.432 { 00:10:36.432 "name": "BaseBdev4", 00:10:36.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.432 "is_configured": false, 00:10:36.432 "data_offset": 0, 00:10:36.432 "data_size": 0 00:10:36.432 } 00:10:36.432 ] 00:10:36.432 }' 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.432 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.999 [2024-11-06 12:41:25.509288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.999 [2024-11-06 12:41:25.509367] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.999 [2024-11-06 12:41:25.517390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.999 [2024-11-06 12:41:25.520251] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.999 [2024-11-06 12:41:25.520322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.999 [2024-11-06 12:41:25.520340] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.999 [2024-11-06 12:41:25.520358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.999 [2024-11-06 12:41:25.520368] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.999 [2024-11-06 12:41:25.520382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.999 "name": "Existed_Raid", 00:10:36.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.999 "strip_size_kb": 64, 00:10:36.999 "state": "configuring", 00:10:36.999 "raid_level": "raid0", 00:10:36.999 "superblock": false, 00:10:36.999 "num_base_bdevs": 4, 00:10:36.999 
"num_base_bdevs_discovered": 1, 00:10:36.999 "num_base_bdevs_operational": 4, 00:10:36.999 "base_bdevs_list": [ 00:10:36.999 { 00:10:36.999 "name": "BaseBdev1", 00:10:36.999 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:36.999 "is_configured": true, 00:10:36.999 "data_offset": 0, 00:10:36.999 "data_size": 65536 00:10:36.999 }, 00:10:36.999 { 00:10:36.999 "name": "BaseBdev2", 00:10:36.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.999 "is_configured": false, 00:10:36.999 "data_offset": 0, 00:10:36.999 "data_size": 0 00:10:36.999 }, 00:10:36.999 { 00:10:36.999 "name": "BaseBdev3", 00:10:36.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.999 "is_configured": false, 00:10:36.999 "data_offset": 0, 00:10:36.999 "data_size": 0 00:10:36.999 }, 00:10:36.999 { 00:10:36.999 "name": "BaseBdev4", 00:10:36.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.999 "is_configured": false, 00:10:36.999 "data_offset": 0, 00:10:36.999 "data_size": 0 00:10:36.999 } 00:10:36.999 ] 00:10:36.999 }' 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.999 12:41:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.566 [2024-11-06 12:41:26.080459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.566 BaseBdev2 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.566 12:41:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.566 [ 00:10:37.566 { 00:10:37.566 "name": "BaseBdev2", 00:10:37.566 "aliases": [ 00:10:37.566 "2fe361d0-311c-49e9-8799-abcd58d5f098" 00:10:37.566 ], 00:10:37.566 "product_name": "Malloc disk", 00:10:37.566 "block_size": 512, 00:10:37.566 "num_blocks": 65536, 00:10:37.566 "uuid": "2fe361d0-311c-49e9-8799-abcd58d5f098", 00:10:37.566 "assigned_rate_limits": { 00:10:37.566 "rw_ios_per_sec": 0, 00:10:37.566 "rw_mbytes_per_sec": 0, 00:10:37.566 "r_mbytes_per_sec": 0, 00:10:37.566 "w_mbytes_per_sec": 0 00:10:37.566 }, 00:10:37.566 "claimed": true, 00:10:37.566 "claim_type": "exclusive_write", 00:10:37.566 "zoned": false, 00:10:37.566 "supported_io_types": { 
00:10:37.566 "read": true, 00:10:37.566 "write": true, 00:10:37.566 "unmap": true, 00:10:37.566 "flush": true, 00:10:37.566 "reset": true, 00:10:37.566 "nvme_admin": false, 00:10:37.566 "nvme_io": false, 00:10:37.566 "nvme_io_md": false, 00:10:37.566 "write_zeroes": true, 00:10:37.566 "zcopy": true, 00:10:37.566 "get_zone_info": false, 00:10:37.566 "zone_management": false, 00:10:37.566 "zone_append": false, 00:10:37.566 "compare": false, 00:10:37.566 "compare_and_write": false, 00:10:37.566 "abort": true, 00:10:37.566 "seek_hole": false, 00:10:37.566 "seek_data": false, 00:10:37.566 "copy": true, 00:10:37.566 "nvme_iov_md": false 00:10:37.566 }, 00:10:37.566 "memory_domains": [ 00:10:37.566 { 00:10:37.566 "dma_device_id": "system", 00:10:37.566 "dma_device_type": 1 00:10:37.566 }, 00:10:37.566 { 00:10:37.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.566 "dma_device_type": 2 00:10:37.566 } 00:10:37.566 ], 00:10:37.566 "driver_specific": {} 00:10:37.566 } 00:10:37.566 ] 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.566 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.566 "name": "Existed_Raid", 00:10:37.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.566 "strip_size_kb": 64, 00:10:37.566 "state": "configuring", 00:10:37.566 "raid_level": "raid0", 00:10:37.566 "superblock": false, 00:10:37.566 "num_base_bdevs": 4, 00:10:37.566 "num_base_bdevs_discovered": 2, 00:10:37.566 "num_base_bdevs_operational": 4, 00:10:37.566 "base_bdevs_list": [ 00:10:37.566 { 00:10:37.566 "name": "BaseBdev1", 00:10:37.566 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:37.566 "is_configured": true, 00:10:37.566 "data_offset": 0, 00:10:37.566 "data_size": 65536 00:10:37.566 }, 00:10:37.566 { 00:10:37.566 "name": "BaseBdev2", 00:10:37.566 "uuid": "2fe361d0-311c-49e9-8799-abcd58d5f098", 00:10:37.566 
"is_configured": true, 00:10:37.567 "data_offset": 0, 00:10:37.567 "data_size": 65536 00:10:37.567 }, 00:10:37.567 { 00:10:37.567 "name": "BaseBdev3", 00:10:37.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.567 "is_configured": false, 00:10:37.567 "data_offset": 0, 00:10:37.567 "data_size": 0 00:10:37.567 }, 00:10:37.567 { 00:10:37.567 "name": "BaseBdev4", 00:10:37.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.567 "is_configured": false, 00:10:37.567 "data_offset": 0, 00:10:37.567 "data_size": 0 00:10:37.567 } 00:10:37.567 ] 00:10:37.567 }' 00:10:37.567 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.567 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.137 [2024-11-06 12:41:26.685260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.137 BaseBdev3 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.137 [ 00:10:38.137 { 00:10:38.137 "name": "BaseBdev3", 00:10:38.137 "aliases": [ 00:10:38.137 "bf857eab-a4a6-4abd-9719-a0372404c20a" 00:10:38.137 ], 00:10:38.137 "product_name": "Malloc disk", 00:10:38.137 "block_size": 512, 00:10:38.137 "num_blocks": 65536, 00:10:38.137 "uuid": "bf857eab-a4a6-4abd-9719-a0372404c20a", 00:10:38.137 "assigned_rate_limits": { 00:10:38.137 "rw_ios_per_sec": 0, 00:10:38.137 "rw_mbytes_per_sec": 0, 00:10:38.137 "r_mbytes_per_sec": 0, 00:10:38.137 "w_mbytes_per_sec": 0 00:10:38.137 }, 00:10:38.137 "claimed": true, 00:10:38.137 "claim_type": "exclusive_write", 00:10:38.137 "zoned": false, 00:10:38.137 "supported_io_types": { 00:10:38.137 "read": true, 00:10:38.137 "write": true, 00:10:38.137 "unmap": true, 00:10:38.137 "flush": true, 00:10:38.137 "reset": true, 00:10:38.137 "nvme_admin": false, 00:10:38.137 "nvme_io": false, 00:10:38.137 "nvme_io_md": false, 00:10:38.137 "write_zeroes": true, 00:10:38.137 "zcopy": true, 00:10:38.137 "get_zone_info": false, 00:10:38.137 "zone_management": false, 00:10:38.137 "zone_append": false, 00:10:38.137 "compare": false, 00:10:38.137 "compare_and_write": false, 
00:10:38.137 "abort": true, 00:10:38.137 "seek_hole": false, 00:10:38.137 "seek_data": false, 00:10:38.137 "copy": true, 00:10:38.137 "nvme_iov_md": false 00:10:38.137 }, 00:10:38.137 "memory_domains": [ 00:10:38.137 { 00:10:38.137 "dma_device_id": "system", 00:10:38.137 "dma_device_type": 1 00:10:38.137 }, 00:10:38.137 { 00:10:38.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.137 "dma_device_type": 2 00:10:38.137 } 00:10:38.137 ], 00:10:38.137 "driver_specific": {} 00:10:38.137 } 00:10:38.137 ] 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.137 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.138 "name": "Existed_Raid", 00:10:38.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.138 "strip_size_kb": 64, 00:10:38.138 "state": "configuring", 00:10:38.138 "raid_level": "raid0", 00:10:38.138 "superblock": false, 00:10:38.138 "num_base_bdevs": 4, 00:10:38.138 "num_base_bdevs_discovered": 3, 00:10:38.138 "num_base_bdevs_operational": 4, 00:10:38.138 "base_bdevs_list": [ 00:10:38.138 { 00:10:38.138 "name": "BaseBdev1", 00:10:38.138 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:38.138 "is_configured": true, 00:10:38.138 "data_offset": 0, 00:10:38.138 "data_size": 65536 00:10:38.138 }, 00:10:38.138 { 00:10:38.138 "name": "BaseBdev2", 00:10:38.138 "uuid": "2fe361d0-311c-49e9-8799-abcd58d5f098", 00:10:38.138 "is_configured": true, 00:10:38.138 "data_offset": 0, 00:10:38.138 "data_size": 65536 00:10:38.138 }, 00:10:38.138 { 00:10:38.138 "name": "BaseBdev3", 00:10:38.138 "uuid": "bf857eab-a4a6-4abd-9719-a0372404c20a", 00:10:38.138 "is_configured": true, 00:10:38.138 "data_offset": 0, 00:10:38.138 "data_size": 65536 00:10:38.138 }, 00:10:38.138 { 00:10:38.138 "name": "BaseBdev4", 00:10:38.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.138 "is_configured": false, 
00:10:38.138 "data_offset": 0, 00:10:38.138 "data_size": 0 00:10:38.138 } 00:10:38.138 ] 00:10:38.138 }' 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.138 12:41:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.716 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.716 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.716 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.975 [2024-11-06 12:41:27.387809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.975 [2024-11-06 12:41:27.387890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.975 [2024-11-06 12:41:27.387913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:38.975 [2024-11-06 12:41:27.388358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.975 [2024-11-06 12:41:27.388633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.975 [2024-11-06 12:41:27.388680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:38.975 [2024-11-06 12:41:27.389014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.975 BaseBdev4 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.975 [ 00:10:38.975 { 00:10:38.975 "name": "BaseBdev4", 00:10:38.975 "aliases": [ 00:10:38.975 "7f642ae9-972d-4a97-84d0-3036bb0f2e86" 00:10:38.975 ], 00:10:38.975 "product_name": "Malloc disk", 00:10:38.975 "block_size": 512, 00:10:38.975 "num_blocks": 65536, 00:10:38.975 "uuid": "7f642ae9-972d-4a97-84d0-3036bb0f2e86", 00:10:38.975 "assigned_rate_limits": { 00:10:38.975 "rw_ios_per_sec": 0, 00:10:38.975 "rw_mbytes_per_sec": 0, 00:10:38.975 "r_mbytes_per_sec": 0, 00:10:38.975 "w_mbytes_per_sec": 0 00:10:38.975 }, 00:10:38.975 "claimed": true, 00:10:38.975 "claim_type": "exclusive_write", 00:10:38.975 "zoned": false, 00:10:38.975 "supported_io_types": { 00:10:38.975 "read": true, 00:10:38.975 "write": true, 00:10:38.975 "unmap": true, 00:10:38.975 "flush": true, 00:10:38.975 "reset": true, 00:10:38.975 
"nvme_admin": false, 00:10:38.975 "nvme_io": false, 00:10:38.975 "nvme_io_md": false, 00:10:38.975 "write_zeroes": true, 00:10:38.975 "zcopy": true, 00:10:38.975 "get_zone_info": false, 00:10:38.975 "zone_management": false, 00:10:38.975 "zone_append": false, 00:10:38.975 "compare": false, 00:10:38.975 "compare_and_write": false, 00:10:38.975 "abort": true, 00:10:38.975 "seek_hole": false, 00:10:38.975 "seek_data": false, 00:10:38.975 "copy": true, 00:10:38.975 "nvme_iov_md": false 00:10:38.975 }, 00:10:38.975 "memory_domains": [ 00:10:38.975 { 00:10:38.975 "dma_device_id": "system", 00:10:38.975 "dma_device_type": 1 00:10:38.975 }, 00:10:38.975 { 00:10:38.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.975 "dma_device_type": 2 00:10:38.975 } 00:10:38.975 ], 00:10:38.975 "driver_specific": {} 00:10:38.975 } 00:10:38.975 ] 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.975 12:41:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.975 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.976 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.976 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.976 "name": "Existed_Raid", 00:10:38.976 "uuid": "ecabf45c-cec6-4589-9581-5699e7f72aec", 00:10:38.976 "strip_size_kb": 64, 00:10:38.976 "state": "online", 00:10:38.976 "raid_level": "raid0", 00:10:38.976 "superblock": false, 00:10:38.976 "num_base_bdevs": 4, 00:10:38.976 "num_base_bdevs_discovered": 4, 00:10:38.976 "num_base_bdevs_operational": 4, 00:10:38.976 "base_bdevs_list": [ 00:10:38.976 { 00:10:38.976 "name": "BaseBdev1", 00:10:38.976 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:38.976 "is_configured": true, 00:10:38.976 "data_offset": 0, 00:10:38.976 "data_size": 65536 00:10:38.976 }, 00:10:38.976 { 00:10:38.976 "name": "BaseBdev2", 00:10:38.976 "uuid": "2fe361d0-311c-49e9-8799-abcd58d5f098", 00:10:38.976 "is_configured": true, 00:10:38.976 "data_offset": 0, 00:10:38.976 "data_size": 65536 00:10:38.976 }, 00:10:38.976 { 00:10:38.976 "name": "BaseBdev3", 00:10:38.976 "uuid": 
"bf857eab-a4a6-4abd-9719-a0372404c20a", 00:10:38.976 "is_configured": true, 00:10:38.976 "data_offset": 0, 00:10:38.976 "data_size": 65536 00:10:38.976 }, 00:10:38.976 { 00:10:38.976 "name": "BaseBdev4", 00:10:38.976 "uuid": "7f642ae9-972d-4a97-84d0-3036bb0f2e86", 00:10:38.976 "is_configured": true, 00:10:38.976 "data_offset": 0, 00:10:38.976 "data_size": 65536 00:10:38.976 } 00:10:38.976 ] 00:10:38.976 }' 00:10:38.976 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.976 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.542 12:41:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.542 [2024-11-06 12:41:27.984555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.542 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.542 12:41:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.542 "name": "Existed_Raid", 00:10:39.542 "aliases": [ 00:10:39.542 "ecabf45c-cec6-4589-9581-5699e7f72aec" 00:10:39.542 ], 00:10:39.542 "product_name": "Raid Volume", 00:10:39.542 "block_size": 512, 00:10:39.542 "num_blocks": 262144, 00:10:39.542 "uuid": "ecabf45c-cec6-4589-9581-5699e7f72aec", 00:10:39.542 "assigned_rate_limits": { 00:10:39.542 "rw_ios_per_sec": 0, 00:10:39.542 "rw_mbytes_per_sec": 0, 00:10:39.542 "r_mbytes_per_sec": 0, 00:10:39.542 "w_mbytes_per_sec": 0 00:10:39.542 }, 00:10:39.542 "claimed": false, 00:10:39.543 "zoned": false, 00:10:39.543 "supported_io_types": { 00:10:39.543 "read": true, 00:10:39.543 "write": true, 00:10:39.543 "unmap": true, 00:10:39.543 "flush": true, 00:10:39.543 "reset": true, 00:10:39.543 "nvme_admin": false, 00:10:39.543 "nvme_io": false, 00:10:39.543 "nvme_io_md": false, 00:10:39.543 "write_zeroes": true, 00:10:39.543 "zcopy": false, 00:10:39.543 "get_zone_info": false, 00:10:39.543 "zone_management": false, 00:10:39.543 "zone_append": false, 00:10:39.543 "compare": false, 00:10:39.543 "compare_and_write": false, 00:10:39.543 "abort": false, 00:10:39.543 "seek_hole": false, 00:10:39.543 "seek_data": false, 00:10:39.543 "copy": false, 00:10:39.543 "nvme_iov_md": false 00:10:39.543 }, 00:10:39.543 "memory_domains": [ 00:10:39.543 { 00:10:39.543 "dma_device_id": "system", 00:10:39.543 "dma_device_type": 1 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.543 "dma_device_type": 2 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "system", 00:10:39.543 "dma_device_type": 1 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.543 "dma_device_type": 2 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "system", 00:10:39.543 "dma_device_type": 1 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:39.543 "dma_device_type": 2 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "system", 00:10:39.543 "dma_device_type": 1 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.543 "dma_device_type": 2 00:10:39.543 } 00:10:39.543 ], 00:10:39.543 "driver_specific": { 00:10:39.543 "raid": { 00:10:39.543 "uuid": "ecabf45c-cec6-4589-9581-5699e7f72aec", 00:10:39.543 "strip_size_kb": 64, 00:10:39.543 "state": "online", 00:10:39.543 "raid_level": "raid0", 00:10:39.543 "superblock": false, 00:10:39.543 "num_base_bdevs": 4, 00:10:39.543 "num_base_bdevs_discovered": 4, 00:10:39.543 "num_base_bdevs_operational": 4, 00:10:39.543 "base_bdevs_list": [ 00:10:39.543 { 00:10:39.543 "name": "BaseBdev1", 00:10:39.543 "uuid": "49880ea5-064e-4592-b029-f878edbc55dc", 00:10:39.543 "is_configured": true, 00:10:39.543 "data_offset": 0, 00:10:39.543 "data_size": 65536 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "name": "BaseBdev2", 00:10:39.543 "uuid": "2fe361d0-311c-49e9-8799-abcd58d5f098", 00:10:39.543 "is_configured": true, 00:10:39.543 "data_offset": 0, 00:10:39.543 "data_size": 65536 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "name": "BaseBdev3", 00:10:39.543 "uuid": "bf857eab-a4a6-4abd-9719-a0372404c20a", 00:10:39.543 "is_configured": true, 00:10:39.543 "data_offset": 0, 00:10:39.543 "data_size": 65536 00:10:39.543 }, 00:10:39.543 { 00:10:39.543 "name": "BaseBdev4", 00:10:39.543 "uuid": "7f642ae9-972d-4a97-84d0-3036bb0f2e86", 00:10:39.543 "is_configured": true, 00:10:39.543 "data_offset": 0, 00:10:39.543 "data_size": 65536 00:10:39.543 } 00:10:39.543 ] 00:10:39.543 } 00:10:39.543 } 00:10:39.543 }' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:39.543 BaseBdev2 00:10:39.543 BaseBdev3 
00:10:39.543 BaseBdev4' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.543 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.801 12:41:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.801 12:41:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.801 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.801 [2024-11-06 12:41:28.376303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.801 [2024-11-06 12:41:28.376359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.801 [2024-11-06 12:41:28.376434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.059 "name": "Existed_Raid", 00:10:40.059 "uuid": "ecabf45c-cec6-4589-9581-5699e7f72aec", 00:10:40.059 "strip_size_kb": 64, 00:10:40.059 "state": "offline", 00:10:40.059 "raid_level": "raid0", 00:10:40.059 "superblock": false, 00:10:40.059 "num_base_bdevs": 4, 00:10:40.059 "num_base_bdevs_discovered": 3, 00:10:40.059 "num_base_bdevs_operational": 3, 00:10:40.059 "base_bdevs_list": [ 00:10:40.059 { 00:10:40.059 "name": null, 00:10:40.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.059 "is_configured": false, 00:10:40.059 "data_offset": 0, 00:10:40.059 "data_size": 65536 00:10:40.059 }, 00:10:40.059 { 00:10:40.059 "name": "BaseBdev2", 00:10:40.059 "uuid": "2fe361d0-311c-49e9-8799-abcd58d5f098", 00:10:40.059 "is_configured": 
true, 00:10:40.059 "data_offset": 0, 00:10:40.059 "data_size": 65536 00:10:40.059 }, 00:10:40.059 { 00:10:40.059 "name": "BaseBdev3", 00:10:40.059 "uuid": "bf857eab-a4a6-4abd-9719-a0372404c20a", 00:10:40.059 "is_configured": true, 00:10:40.059 "data_offset": 0, 00:10:40.059 "data_size": 65536 00:10:40.059 }, 00:10:40.059 { 00:10:40.059 "name": "BaseBdev4", 00:10:40.059 "uuid": "7f642ae9-972d-4a97-84d0-3036bb0f2e86", 00:10:40.059 "is_configured": true, 00:10:40.059 "data_offset": 0, 00:10:40.059 "data_size": 65536 00:10:40.059 } 00:10:40.059 ] 00:10:40.059 }' 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.059 12:41:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.626 [2024-11-06 12:41:29.096522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.626 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.626 [2024-11-06 12:41:29.256396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.885 12:41:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.885 [2024-11-06 12:41:29.412412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:40.885 [2024-11-06 12:41:29.412497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.885 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 BaseBdev2 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 [ 00:10:41.143 { 00:10:41.143 "name": "BaseBdev2", 00:10:41.143 "aliases": [ 00:10:41.143 "4fb506c2-134b-43eb-adcb-69e8fa870b2c" 00:10:41.143 ], 00:10:41.143 "product_name": "Malloc disk", 00:10:41.143 "block_size": 512, 00:10:41.143 "num_blocks": 65536, 00:10:41.143 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:41.143 "assigned_rate_limits": { 00:10:41.143 "rw_ios_per_sec": 0, 00:10:41.143 "rw_mbytes_per_sec": 0, 00:10:41.143 "r_mbytes_per_sec": 0, 00:10:41.143 "w_mbytes_per_sec": 0 00:10:41.143 }, 00:10:41.143 "claimed": false, 00:10:41.143 "zoned": false, 00:10:41.143 "supported_io_types": { 00:10:41.143 "read": true, 00:10:41.143 "write": true, 00:10:41.143 "unmap": true, 00:10:41.143 "flush": true, 00:10:41.143 "reset": true, 00:10:41.143 "nvme_admin": false, 00:10:41.143 "nvme_io": false, 00:10:41.143 "nvme_io_md": false, 00:10:41.143 "write_zeroes": true, 00:10:41.143 "zcopy": true, 00:10:41.143 "get_zone_info": false, 00:10:41.143 "zone_management": false, 00:10:41.143 "zone_append": false, 00:10:41.143 "compare": false, 00:10:41.143 "compare_and_write": false, 00:10:41.143 "abort": true, 00:10:41.143 "seek_hole": false, 00:10:41.143 
"seek_data": false, 00:10:41.143 "copy": true, 00:10:41.143 "nvme_iov_md": false 00:10:41.143 }, 00:10:41.143 "memory_domains": [ 00:10:41.143 { 00:10:41.143 "dma_device_id": "system", 00:10:41.143 "dma_device_type": 1 00:10:41.143 }, 00:10:41.143 { 00:10:41.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.143 "dma_device_type": 2 00:10:41.143 } 00:10:41.143 ], 00:10:41.143 "driver_specific": {} 00:10:41.143 } 00:10:41.143 ] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 BaseBdev3 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 [ 00:10:41.143 { 00:10:41.143 "name": "BaseBdev3", 00:10:41.143 "aliases": [ 00:10:41.143 "13c5e217-aa12-4290-a558-bab41badc386" 00:10:41.143 ], 00:10:41.143 "product_name": "Malloc disk", 00:10:41.143 "block_size": 512, 00:10:41.143 "num_blocks": 65536, 00:10:41.143 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:41.143 "assigned_rate_limits": { 00:10:41.143 "rw_ios_per_sec": 0, 00:10:41.143 "rw_mbytes_per_sec": 0, 00:10:41.143 "r_mbytes_per_sec": 0, 00:10:41.143 "w_mbytes_per_sec": 0 00:10:41.143 }, 00:10:41.143 "claimed": false, 00:10:41.143 "zoned": false, 00:10:41.143 "supported_io_types": { 00:10:41.143 "read": true, 00:10:41.143 "write": true, 00:10:41.143 "unmap": true, 00:10:41.143 "flush": true, 00:10:41.143 "reset": true, 00:10:41.143 "nvme_admin": false, 00:10:41.143 "nvme_io": false, 00:10:41.143 "nvme_io_md": false, 00:10:41.143 "write_zeroes": true, 00:10:41.143 "zcopy": true, 00:10:41.143 "get_zone_info": false, 00:10:41.143 "zone_management": false, 00:10:41.143 "zone_append": false, 00:10:41.143 "compare": false, 00:10:41.143 "compare_and_write": false, 00:10:41.143 "abort": true, 00:10:41.143 "seek_hole": false, 00:10:41.143 "seek_data": false, 
00:10:41.143 "copy": true, 00:10:41.143 "nvme_iov_md": false 00:10:41.143 }, 00:10:41.143 "memory_domains": [ 00:10:41.143 { 00:10:41.143 "dma_device_id": "system", 00:10:41.143 "dma_device_type": 1 00:10:41.143 }, 00:10:41.143 { 00:10:41.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.143 "dma_device_type": 2 00:10:41.143 } 00:10:41.143 ], 00:10:41.143 "driver_specific": {} 00:10:41.143 } 00:10:41.143 ] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 BaseBdev4 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:41.143 
12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 [ 00:10:41.143 { 00:10:41.143 "name": "BaseBdev4", 00:10:41.143 "aliases": [ 00:10:41.143 "f766dd0a-4fad-4e96-a885-c622a9004ad4" 00:10:41.143 ], 00:10:41.143 "product_name": "Malloc disk", 00:10:41.143 "block_size": 512, 00:10:41.143 "num_blocks": 65536, 00:10:41.143 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:41.143 "assigned_rate_limits": { 00:10:41.143 "rw_ios_per_sec": 0, 00:10:41.143 "rw_mbytes_per_sec": 0, 00:10:41.143 "r_mbytes_per_sec": 0, 00:10:41.143 "w_mbytes_per_sec": 0 00:10:41.143 }, 00:10:41.143 "claimed": false, 00:10:41.143 "zoned": false, 00:10:41.143 "supported_io_types": { 00:10:41.143 "read": true, 00:10:41.143 "write": true, 00:10:41.143 "unmap": true, 00:10:41.143 "flush": true, 00:10:41.143 "reset": true, 00:10:41.143 "nvme_admin": false, 00:10:41.143 "nvme_io": false, 00:10:41.143 "nvme_io_md": false, 00:10:41.143 "write_zeroes": true, 00:10:41.143 "zcopy": true, 00:10:41.143 "get_zone_info": false, 00:10:41.143 "zone_management": false, 00:10:41.143 "zone_append": false, 00:10:41.143 "compare": false, 00:10:41.143 "compare_and_write": false, 00:10:41.143 "abort": true, 00:10:41.143 "seek_hole": false, 00:10:41.143 "seek_data": false, 00:10:41.143 
"copy": true, 00:10:41.143 "nvme_iov_md": false 00:10:41.143 }, 00:10:41.143 "memory_domains": [ 00:10:41.143 { 00:10:41.143 "dma_device_id": "system", 00:10:41.143 "dma_device_type": 1 00:10:41.143 }, 00:10:41.143 { 00:10:41.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.143 "dma_device_type": 2 00:10:41.143 } 00:10:41.143 ], 00:10:41.143 "driver_specific": {} 00:10:41.143 } 00:10:41.143 ] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.143 [2024-11-06 12:41:29.790730] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.143 [2024-11-06 12:41:29.790797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.143 [2024-11-06 12:41:29.790842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.143 [2024-11-06 12:41:29.793483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.143 [2024-11-06 12:41:29.793562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.143 12:41:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.143 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.401 "name": "Existed_Raid", 00:10:41.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.401 "strip_size_kb": 64, 00:10:41.401 "state": "configuring", 00:10:41.401 
"raid_level": "raid0", 00:10:41.401 "superblock": false, 00:10:41.401 "num_base_bdevs": 4, 00:10:41.401 "num_base_bdevs_discovered": 3, 00:10:41.401 "num_base_bdevs_operational": 4, 00:10:41.401 "base_bdevs_list": [ 00:10:41.401 { 00:10:41.401 "name": "BaseBdev1", 00:10:41.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.401 "is_configured": false, 00:10:41.401 "data_offset": 0, 00:10:41.401 "data_size": 0 00:10:41.401 }, 00:10:41.401 { 00:10:41.401 "name": "BaseBdev2", 00:10:41.401 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:41.401 "is_configured": true, 00:10:41.401 "data_offset": 0, 00:10:41.401 "data_size": 65536 00:10:41.401 }, 00:10:41.401 { 00:10:41.401 "name": "BaseBdev3", 00:10:41.401 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:41.401 "is_configured": true, 00:10:41.401 "data_offset": 0, 00:10:41.401 "data_size": 65536 00:10:41.401 }, 00:10:41.401 { 00:10:41.401 "name": "BaseBdev4", 00:10:41.401 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:41.401 "is_configured": true, 00:10:41.401 "data_offset": 0, 00:10:41.401 "data_size": 65536 00:10:41.401 } 00:10:41.401 ] 00:10:41.401 }' 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.401 12:41:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.967 [2024-11-06 12:41:30.358950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.967 "name": "Existed_Raid", 00:10:41.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.967 "strip_size_kb": 64, 00:10:41.967 "state": "configuring", 00:10:41.967 "raid_level": "raid0", 00:10:41.967 "superblock": false, 00:10:41.967 
"num_base_bdevs": 4, 00:10:41.967 "num_base_bdevs_discovered": 2, 00:10:41.967 "num_base_bdevs_operational": 4, 00:10:41.967 "base_bdevs_list": [ 00:10:41.967 { 00:10:41.967 "name": "BaseBdev1", 00:10:41.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.967 "is_configured": false, 00:10:41.967 "data_offset": 0, 00:10:41.967 "data_size": 0 00:10:41.967 }, 00:10:41.967 { 00:10:41.967 "name": null, 00:10:41.967 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:41.967 "is_configured": false, 00:10:41.967 "data_offset": 0, 00:10:41.967 "data_size": 65536 00:10:41.967 }, 00:10:41.967 { 00:10:41.967 "name": "BaseBdev3", 00:10:41.967 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:41.967 "is_configured": true, 00:10:41.967 "data_offset": 0, 00:10:41.967 "data_size": 65536 00:10:41.967 }, 00:10:41.967 { 00:10:41.967 "name": "BaseBdev4", 00:10:41.967 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:41.967 "is_configured": true, 00:10:41.967 "data_offset": 0, 00:10:41.967 "data_size": 65536 00:10:41.967 } 00:10:41.967 ] 00:10:41.967 }' 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.967 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.225 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.225 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.225 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.225 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.225 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.485 12:41:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.485 [2024-11-06 12:41:30.986556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.485 BaseBdev1 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.485 12:41:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.485 [ 00:10:42.485 { 00:10:42.485 "name": "BaseBdev1", 00:10:42.485 "aliases": [ 00:10:42.485 "491c5d4a-d332-4125-878e-77841b77043e" 00:10:42.485 ], 00:10:42.485 "product_name": "Malloc disk", 00:10:42.485 "block_size": 512, 00:10:42.485 "num_blocks": 65536, 00:10:42.485 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:42.485 "assigned_rate_limits": { 00:10:42.485 "rw_ios_per_sec": 0, 00:10:42.485 "rw_mbytes_per_sec": 0, 00:10:42.485 "r_mbytes_per_sec": 0, 00:10:42.485 "w_mbytes_per_sec": 0 00:10:42.485 }, 00:10:42.485 "claimed": true, 00:10:42.485 "claim_type": "exclusive_write", 00:10:42.485 "zoned": false, 00:10:42.485 "supported_io_types": { 00:10:42.485 "read": true, 00:10:42.485 "write": true, 00:10:42.485 "unmap": true, 00:10:42.485 "flush": true, 00:10:42.485 "reset": true, 00:10:42.485 "nvme_admin": false, 00:10:42.485 "nvme_io": false, 00:10:42.485 "nvme_io_md": false, 00:10:42.485 "write_zeroes": true, 00:10:42.485 "zcopy": true, 00:10:42.485 "get_zone_info": false, 00:10:42.485 "zone_management": false, 00:10:42.485 "zone_append": false, 00:10:42.485 "compare": false, 00:10:42.485 "compare_and_write": false, 00:10:42.485 "abort": true, 00:10:42.485 "seek_hole": false, 00:10:42.485 "seek_data": false, 00:10:42.485 "copy": true, 00:10:42.485 "nvme_iov_md": false 00:10:42.485 }, 00:10:42.485 "memory_domains": [ 00:10:42.485 { 00:10:42.485 "dma_device_id": "system", 00:10:42.485 "dma_device_type": 1 00:10:42.485 }, 00:10:42.485 { 00:10:42.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.485 "dma_device_type": 2 00:10:42.485 } 00:10:42.485 ], 00:10:42.485 "driver_specific": {} 00:10:42.485 } 00:10:42.485 ] 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.485 "name": "Existed_Raid", 00:10:42.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.485 "strip_size_kb": 64, 00:10:42.485 "state": "configuring", 00:10:42.485 "raid_level": "raid0", 00:10:42.485 "superblock": false, 
00:10:42.485 "num_base_bdevs": 4, 00:10:42.485 "num_base_bdevs_discovered": 3, 00:10:42.485 "num_base_bdevs_operational": 4, 00:10:42.485 "base_bdevs_list": [ 00:10:42.485 { 00:10:42.485 "name": "BaseBdev1", 00:10:42.485 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:42.485 "is_configured": true, 00:10:42.485 "data_offset": 0, 00:10:42.485 "data_size": 65536 00:10:42.485 }, 00:10:42.485 { 00:10:42.485 "name": null, 00:10:42.485 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:42.485 "is_configured": false, 00:10:42.485 "data_offset": 0, 00:10:42.485 "data_size": 65536 00:10:42.485 }, 00:10:42.485 { 00:10:42.485 "name": "BaseBdev3", 00:10:42.485 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:42.485 "is_configured": true, 00:10:42.485 "data_offset": 0, 00:10:42.485 "data_size": 65536 00:10:42.485 }, 00:10:42.485 { 00:10:42.485 "name": "BaseBdev4", 00:10:42.485 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:42.485 "is_configured": true, 00:10:42.485 "data_offset": 0, 00:10:42.485 "data_size": 65536 00:10:42.485 } 00:10:42.485 ] 00:10:42.485 }' 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.485 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.052 12:41:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.052 [2024-11-06 12:41:31.550879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.052 "name": "Existed_Raid", 00:10:43.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.052 "strip_size_kb": 64, 00:10:43.052 "state": "configuring", 00:10:43.052 "raid_level": "raid0", 00:10:43.052 "superblock": false, 00:10:43.052 "num_base_bdevs": 4, 00:10:43.052 "num_base_bdevs_discovered": 2, 00:10:43.052 "num_base_bdevs_operational": 4, 00:10:43.052 "base_bdevs_list": [ 00:10:43.052 { 00:10:43.052 "name": "BaseBdev1", 00:10:43.052 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:43.052 "is_configured": true, 00:10:43.052 "data_offset": 0, 00:10:43.052 "data_size": 65536 00:10:43.052 }, 00:10:43.052 { 00:10:43.052 "name": null, 00:10:43.052 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:43.052 "is_configured": false, 00:10:43.052 "data_offset": 0, 00:10:43.052 "data_size": 65536 00:10:43.052 }, 00:10:43.052 { 00:10:43.052 "name": null, 00:10:43.052 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:43.052 "is_configured": false, 00:10:43.052 "data_offset": 0, 00:10:43.052 "data_size": 65536 00:10:43.052 }, 00:10:43.052 { 00:10:43.052 "name": "BaseBdev4", 00:10:43.052 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:43.052 "is_configured": true, 00:10:43.052 "data_offset": 0, 00:10:43.052 "data_size": 65536 00:10:43.052 } 00:10:43.052 ] 00:10:43.052 }' 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.052 12:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.620 [2024-11-06 12:41:32.143086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.620 "name": "Existed_Raid", 00:10:43.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.620 "strip_size_kb": 64, 00:10:43.620 "state": "configuring", 00:10:43.620 "raid_level": "raid0", 00:10:43.620 "superblock": false, 00:10:43.620 "num_base_bdevs": 4, 00:10:43.620 "num_base_bdevs_discovered": 3, 00:10:43.620 "num_base_bdevs_operational": 4, 00:10:43.620 "base_bdevs_list": [ 00:10:43.620 { 00:10:43.620 "name": "BaseBdev1", 00:10:43.620 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:43.620 "is_configured": true, 00:10:43.620 "data_offset": 0, 00:10:43.620 "data_size": 65536 00:10:43.620 }, 00:10:43.620 { 00:10:43.620 "name": null, 00:10:43.620 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:43.620 "is_configured": false, 00:10:43.620 "data_offset": 0, 00:10:43.620 "data_size": 65536 00:10:43.620 }, 00:10:43.620 { 00:10:43.620 "name": "BaseBdev3", 00:10:43.620 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:43.620 "is_configured": 
true, 00:10:43.620 "data_offset": 0, 00:10:43.620 "data_size": 65536 00:10:43.620 }, 00:10:43.620 { 00:10:43.620 "name": "BaseBdev4", 00:10:43.620 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:43.620 "is_configured": true, 00:10:43.620 "data_offset": 0, 00:10:43.620 "data_size": 65536 00:10:43.620 } 00:10:43.620 ] 00:10:43.620 }' 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.620 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.187 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.188 [2024-11-06 12:41:32.743341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.188 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.447 "name": "Existed_Raid", 00:10:44.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.447 "strip_size_kb": 64, 00:10:44.447 "state": "configuring", 00:10:44.447 "raid_level": "raid0", 00:10:44.447 "superblock": false, 00:10:44.447 "num_base_bdevs": 4, 00:10:44.447 "num_base_bdevs_discovered": 2, 00:10:44.447 "num_base_bdevs_operational": 4, 00:10:44.447 
"base_bdevs_list": [ 00:10:44.447 { 00:10:44.447 "name": null, 00:10:44.447 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:44.447 "is_configured": false, 00:10:44.447 "data_offset": 0, 00:10:44.447 "data_size": 65536 00:10:44.447 }, 00:10:44.447 { 00:10:44.447 "name": null, 00:10:44.447 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:44.447 "is_configured": false, 00:10:44.447 "data_offset": 0, 00:10:44.447 "data_size": 65536 00:10:44.447 }, 00:10:44.447 { 00:10:44.447 "name": "BaseBdev3", 00:10:44.447 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:44.447 "is_configured": true, 00:10:44.447 "data_offset": 0, 00:10:44.447 "data_size": 65536 00:10:44.447 }, 00:10:44.447 { 00:10:44.447 "name": "BaseBdev4", 00:10:44.447 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:44.447 "is_configured": true, 00:10:44.447 "data_offset": 0, 00:10:44.447 "data_size": 65536 00:10:44.447 } 00:10:44.447 ] 00:10:44.447 }' 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.447 12:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.014 12:41:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.014 [2024-11-06 12:41:33.445791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.014 "name": "Existed_Raid", 00:10:45.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.014 "strip_size_kb": 64, 00:10:45.014 "state": "configuring", 00:10:45.014 "raid_level": "raid0", 00:10:45.014 "superblock": false, 00:10:45.014 "num_base_bdevs": 4, 00:10:45.014 "num_base_bdevs_discovered": 3, 00:10:45.014 "num_base_bdevs_operational": 4, 00:10:45.014 "base_bdevs_list": [ 00:10:45.014 { 00:10:45.014 "name": null, 00:10:45.014 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:45.014 "is_configured": false, 00:10:45.014 "data_offset": 0, 00:10:45.014 "data_size": 65536 00:10:45.014 }, 00:10:45.014 { 00:10:45.014 "name": "BaseBdev2", 00:10:45.014 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:45.014 "is_configured": true, 00:10:45.014 "data_offset": 0, 00:10:45.014 "data_size": 65536 00:10:45.014 }, 00:10:45.014 { 00:10:45.014 "name": "BaseBdev3", 00:10:45.014 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:45.014 "is_configured": true, 00:10:45.014 "data_offset": 0, 00:10:45.014 "data_size": 65536 00:10:45.014 }, 00:10:45.014 { 00:10:45.014 "name": "BaseBdev4", 00:10:45.014 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:45.014 "is_configured": true, 00:10:45.014 "data_offset": 0, 00:10:45.014 "data_size": 65536 00:10:45.014 } 00:10:45.014 ] 00:10:45.014 }' 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.014 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.581 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.581 12:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:45.581 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.581 12:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 491c5d4a-d332-4125-878e-77841b77043e 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.581 [2024-11-06 12:41:34.155792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.581 [2024-11-06 12:41:34.155884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.581 [2024-11-06 12:41:34.155897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:45.581 [2024-11-06 12:41:34.156289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:45.581 [2024-11-06 12:41:34.156600] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.581 [2024-11-06 12:41:34.156649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.581 [2024-11-06 12:41:34.157037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.581 NewBaseBdev 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.581 [ 00:10:45.581 { 
00:10:45.581 "name": "NewBaseBdev", 00:10:45.581 "aliases": [ 00:10:45.581 "491c5d4a-d332-4125-878e-77841b77043e" 00:10:45.581 ], 00:10:45.581 "product_name": "Malloc disk", 00:10:45.581 "block_size": 512, 00:10:45.581 "num_blocks": 65536, 00:10:45.581 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:45.581 "assigned_rate_limits": { 00:10:45.581 "rw_ios_per_sec": 0, 00:10:45.581 "rw_mbytes_per_sec": 0, 00:10:45.581 "r_mbytes_per_sec": 0, 00:10:45.581 "w_mbytes_per_sec": 0 00:10:45.581 }, 00:10:45.581 "claimed": true, 00:10:45.581 "claim_type": "exclusive_write", 00:10:45.581 "zoned": false, 00:10:45.581 "supported_io_types": { 00:10:45.581 "read": true, 00:10:45.581 "write": true, 00:10:45.581 "unmap": true, 00:10:45.581 "flush": true, 00:10:45.581 "reset": true, 00:10:45.581 "nvme_admin": false, 00:10:45.581 "nvme_io": false, 00:10:45.581 "nvme_io_md": false, 00:10:45.581 "write_zeroes": true, 00:10:45.581 "zcopy": true, 00:10:45.581 "get_zone_info": false, 00:10:45.581 "zone_management": false, 00:10:45.581 "zone_append": false, 00:10:45.581 "compare": false, 00:10:45.581 "compare_and_write": false, 00:10:45.581 "abort": true, 00:10:45.581 "seek_hole": false, 00:10:45.581 "seek_data": false, 00:10:45.581 "copy": true, 00:10:45.581 "nvme_iov_md": false 00:10:45.581 }, 00:10:45.581 "memory_domains": [ 00:10:45.581 { 00:10:45.581 "dma_device_id": "system", 00:10:45.581 "dma_device_type": 1 00:10:45.581 }, 00:10:45.581 { 00:10:45.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.581 "dma_device_type": 2 00:10:45.581 } 00:10:45.581 ], 00:10:45.581 "driver_specific": {} 00:10:45.581 } 00:10:45.581 ] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:45.581 
12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.581 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.582 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.582 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.582 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.840 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.840 "name": "Existed_Raid", 00:10:45.840 "uuid": "08578bbe-9d25-4059-b571-f92e0aa336f2", 00:10:45.840 "strip_size_kb": 64, 00:10:45.840 "state": "online", 00:10:45.840 "raid_level": "raid0", 00:10:45.840 "superblock": false, 00:10:45.840 "num_base_bdevs": 4, 00:10:45.840 "num_base_bdevs_discovered": 4, 00:10:45.840 
"num_base_bdevs_operational": 4, 00:10:45.840 "base_bdevs_list": [ 00:10:45.840 { 00:10:45.840 "name": "NewBaseBdev", 00:10:45.840 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:45.840 "is_configured": true, 00:10:45.840 "data_offset": 0, 00:10:45.840 "data_size": 65536 00:10:45.840 }, 00:10:45.840 { 00:10:45.840 "name": "BaseBdev2", 00:10:45.840 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:45.840 "is_configured": true, 00:10:45.840 "data_offset": 0, 00:10:45.840 "data_size": 65536 00:10:45.840 }, 00:10:45.840 { 00:10:45.840 "name": "BaseBdev3", 00:10:45.840 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:45.840 "is_configured": true, 00:10:45.840 "data_offset": 0, 00:10:45.840 "data_size": 65536 00:10:45.840 }, 00:10:45.840 { 00:10:45.840 "name": "BaseBdev4", 00:10:45.840 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:45.840 "is_configured": true, 00:10:45.840 "data_offset": 0, 00:10:45.840 "data_size": 65536 00:10:45.840 } 00:10:45.840 ] 00:10:45.840 }' 00:10:45.840 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.840 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.098 [2024-11-06 12:41:34.748469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.371 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.372 "name": "Existed_Raid", 00:10:46.372 "aliases": [ 00:10:46.372 "08578bbe-9d25-4059-b571-f92e0aa336f2" 00:10:46.372 ], 00:10:46.372 "product_name": "Raid Volume", 00:10:46.372 "block_size": 512, 00:10:46.372 "num_blocks": 262144, 00:10:46.372 "uuid": "08578bbe-9d25-4059-b571-f92e0aa336f2", 00:10:46.372 "assigned_rate_limits": { 00:10:46.372 "rw_ios_per_sec": 0, 00:10:46.372 "rw_mbytes_per_sec": 0, 00:10:46.372 "r_mbytes_per_sec": 0, 00:10:46.372 "w_mbytes_per_sec": 0 00:10:46.372 }, 00:10:46.372 "claimed": false, 00:10:46.372 "zoned": false, 00:10:46.372 "supported_io_types": { 00:10:46.372 "read": true, 00:10:46.372 "write": true, 00:10:46.372 "unmap": true, 00:10:46.372 "flush": true, 00:10:46.372 "reset": true, 00:10:46.372 "nvme_admin": false, 00:10:46.372 "nvme_io": false, 00:10:46.372 "nvme_io_md": false, 00:10:46.372 "write_zeroes": true, 00:10:46.372 "zcopy": false, 00:10:46.372 "get_zone_info": false, 00:10:46.372 "zone_management": false, 00:10:46.372 "zone_append": false, 00:10:46.372 "compare": false, 00:10:46.372 "compare_and_write": false, 00:10:46.372 "abort": false, 00:10:46.372 "seek_hole": false, 00:10:46.372 "seek_data": false, 00:10:46.372 "copy": false, 00:10:46.372 "nvme_iov_md": false 00:10:46.372 }, 00:10:46.372 "memory_domains": [ 00:10:46.372 { 00:10:46.372 "dma_device_id": "system", 
00:10:46.372 "dma_device_type": 1 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.372 "dma_device_type": 2 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "system", 00:10:46.372 "dma_device_type": 1 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.372 "dma_device_type": 2 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "system", 00:10:46.372 "dma_device_type": 1 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.372 "dma_device_type": 2 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "system", 00:10:46.372 "dma_device_type": 1 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.372 "dma_device_type": 2 00:10:46.372 } 00:10:46.372 ], 00:10:46.372 "driver_specific": { 00:10:46.372 "raid": { 00:10:46.372 "uuid": "08578bbe-9d25-4059-b571-f92e0aa336f2", 00:10:46.372 "strip_size_kb": 64, 00:10:46.372 "state": "online", 00:10:46.372 "raid_level": "raid0", 00:10:46.372 "superblock": false, 00:10:46.372 "num_base_bdevs": 4, 00:10:46.372 "num_base_bdevs_discovered": 4, 00:10:46.372 "num_base_bdevs_operational": 4, 00:10:46.372 "base_bdevs_list": [ 00:10:46.372 { 00:10:46.372 "name": "NewBaseBdev", 00:10:46.372 "uuid": "491c5d4a-d332-4125-878e-77841b77043e", 00:10:46.372 "is_configured": true, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 65536 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "name": "BaseBdev2", 00:10:46.372 "uuid": "4fb506c2-134b-43eb-adcb-69e8fa870b2c", 00:10:46.372 "is_configured": true, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 65536 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "name": "BaseBdev3", 00:10:46.372 "uuid": "13c5e217-aa12-4290-a558-bab41badc386", 00:10:46.372 "is_configured": true, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 65536 00:10:46.372 }, 00:10:46.372 { 00:10:46.372 "name": "BaseBdev4", 
00:10:46.372 "uuid": "f766dd0a-4fad-4e96-a885-c622a9004ad4", 00:10:46.372 "is_configured": true, 00:10:46.372 "data_offset": 0, 00:10:46.372 "data_size": 65536 00:10:46.372 } 00:10:46.372 ] 00:10:46.372 } 00:10:46.372 } 00:10:46.372 }' 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.372 BaseBdev2 00:10:46.372 BaseBdev3 00:10:46.372 BaseBdev4' 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.372 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.373 12:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.373 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.641 12:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.641 [2024-11-06 12:41:35.120103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.641 [2024-11-06 12:41:35.120157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.641 [2024-11-06 12:41:35.120322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.641 [2024-11-06 12:41:35.120434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.641 [2024-11-06 12:41:35.120459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69478 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 69478 ']' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69478 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69478 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:46.641 killing process with pid 69478 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69478' 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69478 00:10:46.641 [2024-11-06 12:41:35.159713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.641 12:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69478 00:10:46.900 [2024-11-06 12:41:35.538807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.275 00:10:48.275 real 0m13.422s 00:10:48.275 user 0m22.109s 00:10:48.275 sys 0m1.939s 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.275 ************************************ 00:10:48.275 END TEST raid_state_function_test 00:10:48.275 ************************************ 00:10:48.275 12:41:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:10:48.275 12:41:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:48.275 12:41:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:48.275 12:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.275 ************************************ 00:10:48.275 START TEST raid_state_function_test_sb 00:10:48.275 ************************************ 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:48.275 12:41:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:48.275 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70166 00:10:48.276 Process raid pid: 70166 00:10:48.276 12:41:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70166' 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70166 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70166 ']' 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.276 12:41:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.276 [2024-11-06 12:41:36.855146] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:10:48.276 [2024-11-06 12:41:36.855342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.534 [2024-11-06 12:41:37.035728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.534 [2024-11-06 12:41:37.189430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.792 [2024-11-06 12:41:37.422736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.792 [2024-11-06 12:41:37.422810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.359 [2024-11-06 12:41:37.863360] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.359 [2024-11-06 12:41:37.863436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.359 [2024-11-06 12:41:37.863455] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.359 [2024-11-06 12:41:37.863484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.359 [2024-11-06 12:41:37.863497] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:49.359 [2024-11-06 12:41:37.863514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.359 [2024-11-06 12:41:37.863529] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.359 [2024-11-06 12:41:37.863545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.359 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.360 12:41:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.360 "name": "Existed_Raid", 00:10:49.360 "uuid": "8e433085-11f9-429c-a83d-6bd10a459337", 00:10:49.360 "strip_size_kb": 64, 00:10:49.360 "state": "configuring", 00:10:49.360 "raid_level": "raid0", 00:10:49.360 "superblock": true, 00:10:49.360 "num_base_bdevs": 4, 00:10:49.360 "num_base_bdevs_discovered": 0, 00:10:49.360 "num_base_bdevs_operational": 4, 00:10:49.360 "base_bdevs_list": [ 00:10:49.360 { 00:10:49.360 "name": "BaseBdev1", 00:10:49.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.360 "is_configured": false, 00:10:49.360 "data_offset": 0, 00:10:49.360 "data_size": 0 00:10:49.360 }, 00:10:49.360 { 00:10:49.360 "name": "BaseBdev2", 00:10:49.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.360 "is_configured": false, 00:10:49.360 "data_offset": 0, 00:10:49.360 "data_size": 0 00:10:49.360 }, 00:10:49.360 { 00:10:49.360 "name": "BaseBdev3", 00:10:49.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.360 "is_configured": false, 00:10:49.360 "data_offset": 0, 00:10:49.360 "data_size": 0 00:10:49.360 }, 00:10:49.360 { 00:10:49.360 "name": "BaseBdev4", 00:10:49.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.360 "is_configured": false, 00:10:49.360 "data_offset": 0, 00:10:49.360 "data_size": 0 00:10:49.360 } 00:10:49.360 ] 00:10:49.360 }' 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.360 12:41:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.927 [2024-11-06 12:41:38.359419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.927 [2024-11-06 12:41:38.359471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.927 [2024-11-06 12:41:38.367384] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.927 [2024-11-06 12:41:38.367438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.927 [2024-11-06 12:41:38.367455] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.927 [2024-11-06 12:41:38.367473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.927 [2024-11-06 12:41:38.367483] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.927 [2024-11-06 12:41:38.367499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.927 [2024-11-06 12:41:38.367509] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:49.927 [2024-11-06 12:41:38.367524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.927 [2024-11-06 12:41:38.419038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.927 BaseBdev1 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.927 [ 00:10:49.927 { 00:10:49.927 "name": "BaseBdev1", 00:10:49.927 "aliases": [ 00:10:49.927 "8d5adede-008b-4231-b346-0069ded8786d" 00:10:49.927 ], 00:10:49.927 "product_name": "Malloc disk", 00:10:49.927 "block_size": 512, 00:10:49.927 "num_blocks": 65536, 00:10:49.927 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:49.927 "assigned_rate_limits": { 00:10:49.927 "rw_ios_per_sec": 0, 00:10:49.927 "rw_mbytes_per_sec": 0, 00:10:49.927 "r_mbytes_per_sec": 0, 00:10:49.927 "w_mbytes_per_sec": 0 00:10:49.927 }, 00:10:49.927 "claimed": true, 00:10:49.927 "claim_type": "exclusive_write", 00:10:49.927 "zoned": false, 00:10:49.927 "supported_io_types": { 00:10:49.927 "read": true, 00:10:49.927 "write": true, 00:10:49.927 "unmap": true, 00:10:49.927 "flush": true, 00:10:49.927 "reset": true, 00:10:49.927 "nvme_admin": false, 00:10:49.927 "nvme_io": false, 00:10:49.927 "nvme_io_md": false, 00:10:49.927 "write_zeroes": true, 00:10:49.927 "zcopy": true, 00:10:49.927 "get_zone_info": false, 00:10:49.927 "zone_management": false, 00:10:49.927 "zone_append": false, 00:10:49.927 "compare": false, 00:10:49.927 "compare_and_write": false, 00:10:49.927 "abort": true, 00:10:49.927 "seek_hole": false, 00:10:49.927 "seek_data": false, 00:10:49.927 "copy": true, 00:10:49.927 "nvme_iov_md": false 00:10:49.927 }, 00:10:49.927 "memory_domains": [ 00:10:49.927 { 00:10:49.927 "dma_device_id": "system", 00:10:49.927 "dma_device_type": 1 00:10:49.927 }, 00:10:49.927 { 00:10:49.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.927 "dma_device_type": 2 00:10:49.927 } 00:10:49.927 ], 00:10:49.927 "driver_specific": {} 
00:10:49.927 } 00:10:49.927 ] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.927 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.928 "name": "Existed_Raid", 00:10:49.928 "uuid": "d83faeb6-3fd4-496c-a597-42a548f9ff85", 00:10:49.928 "strip_size_kb": 64, 00:10:49.928 "state": "configuring", 00:10:49.928 "raid_level": "raid0", 00:10:49.928 "superblock": true, 00:10:49.928 "num_base_bdevs": 4, 00:10:49.928 "num_base_bdevs_discovered": 1, 00:10:49.928 "num_base_bdevs_operational": 4, 00:10:49.928 "base_bdevs_list": [ 00:10:49.928 { 00:10:49.928 "name": "BaseBdev1", 00:10:49.928 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:49.928 "is_configured": true, 00:10:49.928 "data_offset": 2048, 00:10:49.928 "data_size": 63488 00:10:49.928 }, 00:10:49.928 { 00:10:49.928 "name": "BaseBdev2", 00:10:49.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.928 "is_configured": false, 00:10:49.928 "data_offset": 0, 00:10:49.928 "data_size": 0 00:10:49.928 }, 00:10:49.928 { 00:10:49.928 "name": "BaseBdev3", 00:10:49.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.928 "is_configured": false, 00:10:49.928 "data_offset": 0, 00:10:49.928 "data_size": 0 00:10:49.928 }, 00:10:49.928 { 00:10:49.928 "name": "BaseBdev4", 00:10:49.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.928 "is_configured": false, 00:10:49.928 "data_offset": 0, 00:10:49.928 "data_size": 0 00:10:49.928 } 00:10:49.928 ] 00:10:49.928 }' 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.928 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.495 12:41:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.495 12:41:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.495 12:41:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.495 [2024-11-06 12:41:38.999310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.495 [2024-11-06 12:41:38.999529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.495 [2024-11-06 12:41:39.007322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.495 [2024-11-06 12:41:39.010023] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.495 [2024-11-06 12:41:39.010075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.495 [2024-11-06 12:41:39.010092] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.495 [2024-11-06 12:41:39.010108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.495 [2024-11-06 12:41:39.010119] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.495 [2024-11-06 12:41:39.010132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:50.495 12:41:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.495 "name": 
"Existed_Raid", 00:10:50.495 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:50.495 "strip_size_kb": 64, 00:10:50.495 "state": "configuring", 00:10:50.495 "raid_level": "raid0", 00:10:50.495 "superblock": true, 00:10:50.495 "num_base_bdevs": 4, 00:10:50.495 "num_base_bdevs_discovered": 1, 00:10:50.495 "num_base_bdevs_operational": 4, 00:10:50.495 "base_bdevs_list": [ 00:10:50.495 { 00:10:50.495 "name": "BaseBdev1", 00:10:50.495 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:50.495 "is_configured": true, 00:10:50.495 "data_offset": 2048, 00:10:50.495 "data_size": 63488 00:10:50.495 }, 00:10:50.495 { 00:10:50.495 "name": "BaseBdev2", 00:10:50.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.495 "is_configured": false, 00:10:50.495 "data_offset": 0, 00:10:50.495 "data_size": 0 00:10:50.495 }, 00:10:50.495 { 00:10:50.495 "name": "BaseBdev3", 00:10:50.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.495 "is_configured": false, 00:10:50.495 "data_offset": 0, 00:10:50.495 "data_size": 0 00:10:50.495 }, 00:10:50.495 { 00:10:50.495 "name": "BaseBdev4", 00:10:50.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.495 "is_configured": false, 00:10:50.495 "data_offset": 0, 00:10:50.495 "data_size": 0 00:10:50.495 } 00:10:50.495 ] 00:10:50.495 }' 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.495 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.063 [2024-11-06 12:41:39.586656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:51.063 BaseBdev2 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.063 [ 00:10:51.063 { 00:10:51.063 "name": "BaseBdev2", 00:10:51.063 "aliases": [ 00:10:51.063 "535d9654-56aa-40df-a407-2dcbe27cce82" 00:10:51.063 ], 00:10:51.063 "product_name": "Malloc disk", 00:10:51.063 "block_size": 512, 00:10:51.063 "num_blocks": 65536, 00:10:51.063 "uuid": "535d9654-56aa-40df-a407-2dcbe27cce82", 00:10:51.063 
"assigned_rate_limits": { 00:10:51.063 "rw_ios_per_sec": 0, 00:10:51.063 "rw_mbytes_per_sec": 0, 00:10:51.063 "r_mbytes_per_sec": 0, 00:10:51.063 "w_mbytes_per_sec": 0 00:10:51.063 }, 00:10:51.063 "claimed": true, 00:10:51.063 "claim_type": "exclusive_write", 00:10:51.063 "zoned": false, 00:10:51.063 "supported_io_types": { 00:10:51.063 "read": true, 00:10:51.063 "write": true, 00:10:51.063 "unmap": true, 00:10:51.063 "flush": true, 00:10:51.063 "reset": true, 00:10:51.063 "nvme_admin": false, 00:10:51.063 "nvme_io": false, 00:10:51.063 "nvme_io_md": false, 00:10:51.063 "write_zeroes": true, 00:10:51.063 "zcopy": true, 00:10:51.063 "get_zone_info": false, 00:10:51.063 "zone_management": false, 00:10:51.063 "zone_append": false, 00:10:51.063 "compare": false, 00:10:51.063 "compare_and_write": false, 00:10:51.063 "abort": true, 00:10:51.063 "seek_hole": false, 00:10:51.063 "seek_data": false, 00:10:51.063 "copy": true, 00:10:51.063 "nvme_iov_md": false 00:10:51.063 }, 00:10:51.063 "memory_domains": [ 00:10:51.063 { 00:10:51.063 "dma_device_id": "system", 00:10:51.063 "dma_device_type": 1 00:10:51.063 }, 00:10:51.063 { 00:10:51.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.063 "dma_device_type": 2 00:10:51.063 } 00:10:51.063 ], 00:10:51.063 "driver_specific": {} 00:10:51.063 } 00:10:51.063 ] 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.063 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.063 "name": "Existed_Raid", 00:10:51.063 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:51.063 "strip_size_kb": 64, 00:10:51.063 "state": "configuring", 00:10:51.063 "raid_level": "raid0", 00:10:51.063 "superblock": true, 00:10:51.063 "num_base_bdevs": 4, 00:10:51.063 "num_base_bdevs_discovered": 2, 00:10:51.063 "num_base_bdevs_operational": 4, 
00:10:51.063 "base_bdevs_list": [ 00:10:51.063 { 00:10:51.063 "name": "BaseBdev1", 00:10:51.063 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:51.063 "is_configured": true, 00:10:51.063 "data_offset": 2048, 00:10:51.063 "data_size": 63488 00:10:51.063 }, 00:10:51.063 { 00:10:51.063 "name": "BaseBdev2", 00:10:51.063 "uuid": "535d9654-56aa-40df-a407-2dcbe27cce82", 00:10:51.063 "is_configured": true, 00:10:51.063 "data_offset": 2048, 00:10:51.063 "data_size": 63488 00:10:51.063 }, 00:10:51.063 { 00:10:51.063 "name": "BaseBdev3", 00:10:51.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.063 "is_configured": false, 00:10:51.063 "data_offset": 0, 00:10:51.063 "data_size": 0 00:10:51.063 }, 00:10:51.064 { 00:10:51.064 "name": "BaseBdev4", 00:10:51.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.064 "is_configured": false, 00:10:51.064 "data_offset": 0, 00:10:51.064 "data_size": 0 00:10:51.064 } 00:10:51.064 ] 00:10:51.064 }' 00:10:51.064 12:41:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.064 12:41:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.630 [2024-11-06 12:41:40.199466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.630 BaseBdev3 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.630 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.630 [ 00:10:51.630 { 00:10:51.630 "name": "BaseBdev3", 00:10:51.630 "aliases": [ 00:10:51.630 "a7b7d626-b9fe-4ebb-8288-78b7c7c5bc7f" 00:10:51.630 ], 00:10:51.630 "product_name": "Malloc disk", 00:10:51.630 "block_size": 512, 00:10:51.630 "num_blocks": 65536, 00:10:51.630 "uuid": "a7b7d626-b9fe-4ebb-8288-78b7c7c5bc7f", 00:10:51.630 "assigned_rate_limits": { 00:10:51.630 "rw_ios_per_sec": 0, 00:10:51.630 "rw_mbytes_per_sec": 0, 00:10:51.630 "r_mbytes_per_sec": 0, 00:10:51.630 "w_mbytes_per_sec": 0 00:10:51.630 }, 00:10:51.630 "claimed": true, 00:10:51.630 "claim_type": "exclusive_write", 00:10:51.630 "zoned": false, 00:10:51.630 "supported_io_types": { 00:10:51.630 "read": true, 00:10:51.630 
"write": true, 00:10:51.631 "unmap": true, 00:10:51.631 "flush": true, 00:10:51.631 "reset": true, 00:10:51.631 "nvme_admin": false, 00:10:51.631 "nvme_io": false, 00:10:51.631 "nvme_io_md": false, 00:10:51.631 "write_zeroes": true, 00:10:51.631 "zcopy": true, 00:10:51.631 "get_zone_info": false, 00:10:51.631 "zone_management": false, 00:10:51.631 "zone_append": false, 00:10:51.631 "compare": false, 00:10:51.631 "compare_and_write": false, 00:10:51.631 "abort": true, 00:10:51.631 "seek_hole": false, 00:10:51.631 "seek_data": false, 00:10:51.631 "copy": true, 00:10:51.631 "nvme_iov_md": false 00:10:51.631 }, 00:10:51.631 "memory_domains": [ 00:10:51.631 { 00:10:51.631 "dma_device_id": "system", 00:10:51.631 "dma_device_type": 1 00:10:51.631 }, 00:10:51.631 { 00:10:51.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.631 "dma_device_type": 2 00:10:51.631 } 00:10:51.631 ], 00:10:51.631 "driver_specific": {} 00:10:51.631 } 00:10:51.631 ] 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.631 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.889 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.889 "name": "Existed_Raid", 00:10:51.889 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:51.889 "strip_size_kb": 64, 00:10:51.889 "state": "configuring", 00:10:51.889 "raid_level": "raid0", 00:10:51.889 "superblock": true, 00:10:51.889 "num_base_bdevs": 4, 00:10:51.889 "num_base_bdevs_discovered": 3, 00:10:51.889 "num_base_bdevs_operational": 4, 00:10:51.889 "base_bdevs_list": [ 00:10:51.889 { 00:10:51.889 "name": "BaseBdev1", 00:10:51.889 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:51.889 "is_configured": true, 00:10:51.889 "data_offset": 2048, 00:10:51.889 "data_size": 63488 00:10:51.889 }, 00:10:51.889 { 00:10:51.889 "name": "BaseBdev2", 00:10:51.889 "uuid": 
"535d9654-56aa-40df-a407-2dcbe27cce82", 00:10:51.889 "is_configured": true, 00:10:51.889 "data_offset": 2048, 00:10:51.889 "data_size": 63488 00:10:51.889 }, 00:10:51.889 { 00:10:51.889 "name": "BaseBdev3", 00:10:51.889 "uuid": "a7b7d626-b9fe-4ebb-8288-78b7c7c5bc7f", 00:10:51.889 "is_configured": true, 00:10:51.889 "data_offset": 2048, 00:10:51.889 "data_size": 63488 00:10:51.889 }, 00:10:51.889 { 00:10:51.889 "name": "BaseBdev4", 00:10:51.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.889 "is_configured": false, 00:10:51.889 "data_offset": 0, 00:10:51.889 "data_size": 0 00:10:51.889 } 00:10:51.889 ] 00:10:51.889 }' 00:10:51.889 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.889 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:52.149 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.149 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.149 [2024-11-06 12:41:40.791121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.149 [2024-11-06 12:41:40.791477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:52.149 [2024-11-06 12:41:40.791499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.149 BaseBdev4 00:10:52.149 [2024-11-06 12:41:40.791837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.149 [2024-11-06 12:41:40.792045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:52.149 [2024-11-06 12:41:40.792068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:52.150 [2024-11-06 12:41:40.792264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.150 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.408 [ 00:10:52.408 { 00:10:52.408 "name": "BaseBdev4", 00:10:52.408 "aliases": [ 00:10:52.408 "81b0fbb9-739b-4b28-a511-e395f70e6c26" 00:10:52.408 ], 00:10:52.408 "product_name": "Malloc disk", 00:10:52.408 "block_size": 512, 00:10:52.408 
"num_blocks": 65536, 00:10:52.408 "uuid": "81b0fbb9-739b-4b28-a511-e395f70e6c26", 00:10:52.408 "assigned_rate_limits": { 00:10:52.408 "rw_ios_per_sec": 0, 00:10:52.408 "rw_mbytes_per_sec": 0, 00:10:52.408 "r_mbytes_per_sec": 0, 00:10:52.408 "w_mbytes_per_sec": 0 00:10:52.408 }, 00:10:52.408 "claimed": true, 00:10:52.408 "claim_type": "exclusive_write", 00:10:52.408 "zoned": false, 00:10:52.408 "supported_io_types": { 00:10:52.408 "read": true, 00:10:52.408 "write": true, 00:10:52.408 "unmap": true, 00:10:52.408 "flush": true, 00:10:52.408 "reset": true, 00:10:52.408 "nvme_admin": false, 00:10:52.408 "nvme_io": false, 00:10:52.408 "nvme_io_md": false, 00:10:52.408 "write_zeroes": true, 00:10:52.408 "zcopy": true, 00:10:52.408 "get_zone_info": false, 00:10:52.408 "zone_management": false, 00:10:52.408 "zone_append": false, 00:10:52.408 "compare": false, 00:10:52.408 "compare_and_write": false, 00:10:52.408 "abort": true, 00:10:52.408 "seek_hole": false, 00:10:52.408 "seek_data": false, 00:10:52.408 "copy": true, 00:10:52.408 "nvme_iov_md": false 00:10:52.408 }, 00:10:52.408 "memory_domains": [ 00:10:52.408 { 00:10:52.408 "dma_device_id": "system", 00:10:52.408 "dma_device_type": 1 00:10:52.408 }, 00:10:52.408 { 00:10:52.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.408 "dma_device_type": 2 00:10:52.408 } 00:10:52.408 ], 00:10:52.408 "driver_specific": {} 00:10:52.408 } 00:10:52.408 ] 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.408 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.409 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.409 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.409 "name": "Existed_Raid", 00:10:52.409 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:52.409 "strip_size_kb": 64, 00:10:52.409 "state": "online", 00:10:52.409 "raid_level": "raid0", 00:10:52.409 "superblock": true, 00:10:52.409 "num_base_bdevs": 4, 
00:10:52.409 "num_base_bdevs_discovered": 4, 00:10:52.409 "num_base_bdevs_operational": 4, 00:10:52.409 "base_bdevs_list": [ 00:10:52.409 { 00:10:52.409 "name": "BaseBdev1", 00:10:52.409 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:52.409 "is_configured": true, 00:10:52.409 "data_offset": 2048, 00:10:52.409 "data_size": 63488 00:10:52.409 }, 00:10:52.409 { 00:10:52.409 "name": "BaseBdev2", 00:10:52.409 "uuid": "535d9654-56aa-40df-a407-2dcbe27cce82", 00:10:52.409 "is_configured": true, 00:10:52.409 "data_offset": 2048, 00:10:52.409 "data_size": 63488 00:10:52.409 }, 00:10:52.409 { 00:10:52.409 "name": "BaseBdev3", 00:10:52.409 "uuid": "a7b7d626-b9fe-4ebb-8288-78b7c7c5bc7f", 00:10:52.409 "is_configured": true, 00:10:52.409 "data_offset": 2048, 00:10:52.409 "data_size": 63488 00:10:52.409 }, 00:10:52.409 { 00:10:52.409 "name": "BaseBdev4", 00:10:52.409 "uuid": "81b0fbb9-739b-4b28-a511-e395f70e6c26", 00:10:52.409 "is_configured": true, 00:10:52.409 "data_offset": 2048, 00:10:52.409 "data_size": 63488 00:10:52.409 } 00:10:52.409 ] 00:10:52.409 }' 00:10:52.409 12:41:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.409 12:41:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.974 
12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.974 [2024-11-06 12:41:41.340068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.974 "name": "Existed_Raid", 00:10:52.974 "aliases": [ 00:10:52.974 "953a549c-77c3-4b9f-91c8-d88ee13e84ba" 00:10:52.974 ], 00:10:52.974 "product_name": "Raid Volume", 00:10:52.974 "block_size": 512, 00:10:52.974 "num_blocks": 253952, 00:10:52.974 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:52.974 "assigned_rate_limits": { 00:10:52.974 "rw_ios_per_sec": 0, 00:10:52.974 "rw_mbytes_per_sec": 0, 00:10:52.974 "r_mbytes_per_sec": 0, 00:10:52.974 "w_mbytes_per_sec": 0 00:10:52.974 }, 00:10:52.974 "claimed": false, 00:10:52.974 "zoned": false, 00:10:52.974 "supported_io_types": { 00:10:52.974 "read": true, 00:10:52.974 "write": true, 00:10:52.974 "unmap": true, 00:10:52.974 "flush": true, 00:10:52.974 "reset": true, 00:10:52.974 "nvme_admin": false, 00:10:52.974 "nvme_io": false, 00:10:52.974 "nvme_io_md": false, 00:10:52.974 "write_zeroes": true, 00:10:52.974 "zcopy": false, 00:10:52.974 "get_zone_info": false, 00:10:52.974 "zone_management": false, 00:10:52.974 "zone_append": false, 00:10:52.974 "compare": false, 00:10:52.974 "compare_and_write": false, 00:10:52.974 "abort": false, 00:10:52.974 "seek_hole": false, 00:10:52.974 "seek_data": false, 00:10:52.974 "copy": false, 00:10:52.974 
"nvme_iov_md": false 00:10:52.974 }, 00:10:52.974 "memory_domains": [ 00:10:52.974 { 00:10:52.974 "dma_device_id": "system", 00:10:52.974 "dma_device_type": 1 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.974 "dma_device_type": 2 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "system", 00:10:52.974 "dma_device_type": 1 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.974 "dma_device_type": 2 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "system", 00:10:52.974 "dma_device_type": 1 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.974 "dma_device_type": 2 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "system", 00:10:52.974 "dma_device_type": 1 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.974 "dma_device_type": 2 00:10:52.974 } 00:10:52.974 ], 00:10:52.974 "driver_specific": { 00:10:52.974 "raid": { 00:10:52.974 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:52.974 "strip_size_kb": 64, 00:10:52.974 "state": "online", 00:10:52.974 "raid_level": "raid0", 00:10:52.974 "superblock": true, 00:10:52.974 "num_base_bdevs": 4, 00:10:52.974 "num_base_bdevs_discovered": 4, 00:10:52.974 "num_base_bdevs_operational": 4, 00:10:52.974 "base_bdevs_list": [ 00:10:52.974 { 00:10:52.974 "name": "BaseBdev1", 00:10:52.974 "uuid": "8d5adede-008b-4231-b346-0069ded8786d", 00:10:52.974 "is_configured": true, 00:10:52.974 "data_offset": 2048, 00:10:52.974 "data_size": 63488 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "name": "BaseBdev2", 00:10:52.974 "uuid": "535d9654-56aa-40df-a407-2dcbe27cce82", 00:10:52.974 "is_configured": true, 00:10:52.974 "data_offset": 2048, 00:10:52.974 "data_size": 63488 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "name": "BaseBdev3", 00:10:52.974 "uuid": "a7b7d626-b9fe-4ebb-8288-78b7c7c5bc7f", 00:10:52.974 "is_configured": true, 
00:10:52.974 "data_offset": 2048, 00:10:52.974 "data_size": 63488 00:10:52.974 }, 00:10:52.974 { 00:10:52.974 "name": "BaseBdev4", 00:10:52.974 "uuid": "81b0fbb9-739b-4b28-a511-e395f70e6c26", 00:10:52.974 "is_configured": true, 00:10:52.974 "data_offset": 2048, 00:10:52.974 "data_size": 63488 00:10:52.974 } 00:10:52.974 ] 00:10:52.974 } 00:10:52.974 } 00:10:52.974 }' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:52.974 BaseBdev2 00:10:52.974 BaseBdev3 00:10:52.974 BaseBdev4' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.974 12:41:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.974 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.975 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.233 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.233 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.233 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:53.233 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.233 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.234 [2024-11-06 12:41:41.695721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.234 [2024-11-06 12:41:41.696039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.234 [2024-11-06 12:41:41.696152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.234 "name": "Existed_Raid", 00:10:53.234 "uuid": "953a549c-77c3-4b9f-91c8-d88ee13e84ba", 00:10:53.234 "strip_size_kb": 64, 00:10:53.234 "state": "offline", 00:10:53.234 "raid_level": "raid0", 00:10:53.234 "superblock": true, 00:10:53.234 "num_base_bdevs": 4, 00:10:53.234 "num_base_bdevs_discovered": 3, 00:10:53.234 "num_base_bdevs_operational": 3, 00:10:53.234 "base_bdevs_list": [ 00:10:53.234 { 00:10:53.234 "name": null, 00:10:53.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.234 "is_configured": false, 00:10:53.234 "data_offset": 0, 00:10:53.234 "data_size": 63488 00:10:53.234 }, 00:10:53.234 { 00:10:53.234 "name": "BaseBdev2", 00:10:53.234 "uuid": "535d9654-56aa-40df-a407-2dcbe27cce82", 00:10:53.234 "is_configured": true, 00:10:53.234 "data_offset": 2048, 00:10:53.234 "data_size": 63488 00:10:53.234 }, 00:10:53.234 { 00:10:53.234 "name": "BaseBdev3", 00:10:53.234 "uuid": "a7b7d626-b9fe-4ebb-8288-78b7c7c5bc7f", 00:10:53.234 "is_configured": true, 00:10:53.234 "data_offset": 2048, 00:10:53.234 "data_size": 63488 00:10:53.234 }, 00:10:53.234 { 00:10:53.234 "name": "BaseBdev4", 00:10:53.234 "uuid": "81b0fbb9-739b-4b28-a511-e395f70e6c26", 00:10:53.234 "is_configured": true, 00:10:53.234 "data_offset": 2048, 00:10:53.234 "data_size": 63488 00:10:53.234 } 00:10:53.234 ] 00:10:53.234 }' 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.234 12:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.801 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:53.801 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:53.801 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.801 
12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:53.801 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.802 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 [2024-11-06 12:41:42.367426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.060 [2024-11-06 12:41:42.515883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:54.060 12:41:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.060 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.060 [2024-11-06 12:41:42.653028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:54.060 [2024-11-06 12:41:42.653089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 BaseBdev2 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 [ 00:10:54.320 { 00:10:54.320 "name": "BaseBdev2", 00:10:54.320 "aliases": [ 00:10:54.320 
"93badcc0-9a31-4772-9716-7a0de16b5657" 00:10:54.320 ], 00:10:54.320 "product_name": "Malloc disk", 00:10:54.320 "block_size": 512, 00:10:54.320 "num_blocks": 65536, 00:10:54.320 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:54.320 "assigned_rate_limits": { 00:10:54.320 "rw_ios_per_sec": 0, 00:10:54.320 "rw_mbytes_per_sec": 0, 00:10:54.320 "r_mbytes_per_sec": 0, 00:10:54.320 "w_mbytes_per_sec": 0 00:10:54.320 }, 00:10:54.320 "claimed": false, 00:10:54.320 "zoned": false, 00:10:54.320 "supported_io_types": { 00:10:54.320 "read": true, 00:10:54.320 "write": true, 00:10:54.320 "unmap": true, 00:10:54.320 "flush": true, 00:10:54.320 "reset": true, 00:10:54.320 "nvme_admin": false, 00:10:54.320 "nvme_io": false, 00:10:54.320 "nvme_io_md": false, 00:10:54.320 "write_zeroes": true, 00:10:54.320 "zcopy": true, 00:10:54.320 "get_zone_info": false, 00:10:54.320 "zone_management": false, 00:10:54.320 "zone_append": false, 00:10:54.320 "compare": false, 00:10:54.320 "compare_and_write": false, 00:10:54.320 "abort": true, 00:10:54.320 "seek_hole": false, 00:10:54.320 "seek_data": false, 00:10:54.320 "copy": true, 00:10:54.320 "nvme_iov_md": false 00:10:54.320 }, 00:10:54.320 "memory_domains": [ 00:10:54.320 { 00:10:54.320 "dma_device_id": "system", 00:10:54.320 "dma_device_type": 1 00:10:54.320 }, 00:10:54.320 { 00:10:54.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.320 "dma_device_type": 2 00:10:54.320 } 00:10:54.320 ], 00:10:54.320 "driver_specific": {} 00:10:54.320 } 00:10:54.320 ] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.320 12:41:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 BaseBdev3 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.320 [ 00:10:54.320 { 
00:10:54.320 "name": "BaseBdev3", 00:10:54.320 "aliases": [ 00:10:54.320 "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4" 00:10:54.320 ], 00:10:54.320 "product_name": "Malloc disk", 00:10:54.320 "block_size": 512, 00:10:54.320 "num_blocks": 65536, 00:10:54.320 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:54.320 "assigned_rate_limits": { 00:10:54.320 "rw_ios_per_sec": 0, 00:10:54.320 "rw_mbytes_per_sec": 0, 00:10:54.320 "r_mbytes_per_sec": 0, 00:10:54.320 "w_mbytes_per_sec": 0 00:10:54.320 }, 00:10:54.320 "claimed": false, 00:10:54.320 "zoned": false, 00:10:54.320 "supported_io_types": { 00:10:54.320 "read": true, 00:10:54.320 "write": true, 00:10:54.320 "unmap": true, 00:10:54.320 "flush": true, 00:10:54.320 "reset": true, 00:10:54.320 "nvme_admin": false, 00:10:54.320 "nvme_io": false, 00:10:54.320 "nvme_io_md": false, 00:10:54.320 "write_zeroes": true, 00:10:54.320 "zcopy": true, 00:10:54.320 "get_zone_info": false, 00:10:54.320 "zone_management": false, 00:10:54.320 "zone_append": false, 00:10:54.320 "compare": false, 00:10:54.320 "compare_and_write": false, 00:10:54.320 "abort": true, 00:10:54.320 "seek_hole": false, 00:10:54.320 "seek_data": false, 00:10:54.320 "copy": true, 00:10:54.320 "nvme_iov_md": false 00:10:54.320 }, 00:10:54.320 "memory_domains": [ 00:10:54.320 { 00:10:54.320 "dma_device_id": "system", 00:10:54.320 "dma_device_type": 1 00:10:54.320 }, 00:10:54.320 { 00:10:54.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.320 "dma_device_type": 2 00:10:54.320 } 00:10:54.320 ], 00:10:54.320 "driver_specific": {} 00:10:54.320 } 00:10:54.320 ] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.320 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 BaseBdev4 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.580 12:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:54.580 [ 00:10:54.580 { 00:10:54.580 "name": "BaseBdev4", 00:10:54.580 "aliases": [ 00:10:54.580 "8134d1ea-dde0-4e83-b11e-5c592af29027" 00:10:54.580 ], 00:10:54.580 "product_name": "Malloc disk", 00:10:54.580 "block_size": 512, 00:10:54.580 "num_blocks": 65536, 00:10:54.580 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:54.580 "assigned_rate_limits": { 00:10:54.580 "rw_ios_per_sec": 0, 00:10:54.580 "rw_mbytes_per_sec": 0, 00:10:54.580 "r_mbytes_per_sec": 0, 00:10:54.580 "w_mbytes_per_sec": 0 00:10:54.580 }, 00:10:54.580 "claimed": false, 00:10:54.580 "zoned": false, 00:10:54.580 "supported_io_types": { 00:10:54.580 "read": true, 00:10:54.580 "write": true, 00:10:54.580 "unmap": true, 00:10:54.580 "flush": true, 00:10:54.580 "reset": true, 00:10:54.580 "nvme_admin": false, 00:10:54.580 "nvme_io": false, 00:10:54.580 "nvme_io_md": false, 00:10:54.580 "write_zeroes": true, 00:10:54.580 "zcopy": true, 00:10:54.580 "get_zone_info": false, 00:10:54.580 "zone_management": false, 00:10:54.580 "zone_append": false, 00:10:54.580 "compare": false, 00:10:54.580 "compare_and_write": false, 00:10:54.580 "abort": true, 00:10:54.580 "seek_hole": false, 00:10:54.580 "seek_data": false, 00:10:54.580 "copy": true, 00:10:54.580 "nvme_iov_md": false 00:10:54.580 }, 00:10:54.580 "memory_domains": [ 00:10:54.580 { 00:10:54.580 "dma_device_id": "system", 00:10:54.580 "dma_device_type": 1 00:10:54.580 }, 00:10:54.580 { 00:10:54.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.580 "dma_device_type": 2 00:10:54.580 } 00:10:54.580 ], 00:10:54.580 "driver_specific": {} 00:10:54.580 } 00:10:54.580 ] 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:54.580 12:41:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 [2024-11-06 12:41:43.025799] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.580 [2024-11-06 12:41:43.025860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.580 [2024-11-06 12:41:43.025898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.580 [2024-11-06 12:41:43.028502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.580 [2024-11-06 12:41:43.028580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.580 "name": "Existed_Raid", 00:10:54.580 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:54.580 "strip_size_kb": 64, 00:10:54.580 "state": "configuring", 00:10:54.580 "raid_level": "raid0", 00:10:54.580 "superblock": true, 00:10:54.580 "num_base_bdevs": 4, 00:10:54.580 "num_base_bdevs_discovered": 3, 00:10:54.580 "num_base_bdevs_operational": 4, 00:10:54.580 "base_bdevs_list": [ 00:10:54.580 { 00:10:54.580 "name": "BaseBdev1", 00:10:54.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.580 "is_configured": false, 00:10:54.580 "data_offset": 0, 00:10:54.580 "data_size": 0 00:10:54.580 }, 00:10:54.580 { 00:10:54.580 "name": "BaseBdev2", 00:10:54.580 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:54.580 "is_configured": true, 00:10:54.580 "data_offset": 2048, 00:10:54.580 "data_size": 63488 
00:10:54.580 }, 00:10:54.580 { 00:10:54.580 "name": "BaseBdev3", 00:10:54.580 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:54.580 "is_configured": true, 00:10:54.580 "data_offset": 2048, 00:10:54.580 "data_size": 63488 00:10:54.580 }, 00:10:54.580 { 00:10:54.580 "name": "BaseBdev4", 00:10:54.580 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:54.580 "is_configured": true, 00:10:54.580 "data_offset": 2048, 00:10:54.580 "data_size": 63488 00:10:54.580 } 00:10:54.580 ] 00:10:54.580 }' 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.580 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.148 [2024-11-06 12:41:43.553942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.148 "name": "Existed_Raid", 00:10:55.148 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:55.148 "strip_size_kb": 64, 00:10:55.148 "state": "configuring", 00:10:55.148 "raid_level": "raid0", 00:10:55.148 "superblock": true, 00:10:55.148 "num_base_bdevs": 4, 00:10:55.148 "num_base_bdevs_discovered": 2, 00:10:55.148 "num_base_bdevs_operational": 4, 00:10:55.148 "base_bdevs_list": [ 00:10:55.148 { 00:10:55.148 "name": "BaseBdev1", 00:10:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.148 "is_configured": false, 00:10:55.148 "data_offset": 0, 00:10:55.148 "data_size": 0 00:10:55.148 }, 00:10:55.148 { 00:10:55.148 "name": null, 00:10:55.148 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:55.148 "is_configured": false, 00:10:55.148 "data_offset": 0, 00:10:55.148 "data_size": 63488 
00:10:55.148 }, 00:10:55.148 { 00:10:55.148 "name": "BaseBdev3", 00:10:55.148 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:55.148 "is_configured": true, 00:10:55.148 "data_offset": 2048, 00:10:55.148 "data_size": 63488 00:10:55.148 }, 00:10:55.148 { 00:10:55.148 "name": "BaseBdev4", 00:10:55.148 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:55.148 "is_configured": true, 00:10:55.148 "data_offset": 2048, 00:10:55.148 "data_size": 63488 00:10:55.148 } 00:10:55.148 ] 00:10:55.148 }' 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.148 12:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 [2024-11-06 12:41:44.176181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.716 BaseBdev1 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 [ 00:10:55.716 { 00:10:55.716 "name": "BaseBdev1", 00:10:55.716 "aliases": [ 00:10:55.716 "415216df-5d62-460b-8936-88eecb0583d5" 00:10:55.716 ], 00:10:55.716 "product_name": "Malloc disk", 00:10:55.716 "block_size": 512, 00:10:55.716 "num_blocks": 65536, 00:10:55.716 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:55.716 "assigned_rate_limits": { 00:10:55.716 "rw_ios_per_sec": 0, 00:10:55.716 "rw_mbytes_per_sec": 0, 
00:10:55.716 "r_mbytes_per_sec": 0, 00:10:55.716 "w_mbytes_per_sec": 0 00:10:55.716 }, 00:10:55.716 "claimed": true, 00:10:55.716 "claim_type": "exclusive_write", 00:10:55.716 "zoned": false, 00:10:55.716 "supported_io_types": { 00:10:55.716 "read": true, 00:10:55.716 "write": true, 00:10:55.716 "unmap": true, 00:10:55.716 "flush": true, 00:10:55.716 "reset": true, 00:10:55.716 "nvme_admin": false, 00:10:55.716 "nvme_io": false, 00:10:55.716 "nvme_io_md": false, 00:10:55.716 "write_zeroes": true, 00:10:55.716 "zcopy": true, 00:10:55.716 "get_zone_info": false, 00:10:55.716 "zone_management": false, 00:10:55.716 "zone_append": false, 00:10:55.716 "compare": false, 00:10:55.716 "compare_and_write": false, 00:10:55.716 "abort": true, 00:10:55.716 "seek_hole": false, 00:10:55.716 "seek_data": false, 00:10:55.716 "copy": true, 00:10:55.716 "nvme_iov_md": false 00:10:55.716 }, 00:10:55.716 "memory_domains": [ 00:10:55.716 { 00:10:55.716 "dma_device_id": "system", 00:10:55.716 "dma_device_type": 1 00:10:55.716 }, 00:10:55.716 { 00:10:55.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.716 "dma_device_type": 2 00:10:55.716 } 00:10:55.716 ], 00:10:55.716 "driver_specific": {} 00:10:55.716 } 00:10:55.716 ] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.716 12:41:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.716 "name": "Existed_Raid", 00:10:55.716 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:55.716 "strip_size_kb": 64, 00:10:55.716 "state": "configuring", 00:10:55.716 "raid_level": "raid0", 00:10:55.716 "superblock": true, 00:10:55.716 "num_base_bdevs": 4, 00:10:55.716 "num_base_bdevs_discovered": 3, 00:10:55.716 "num_base_bdevs_operational": 4, 00:10:55.716 "base_bdevs_list": [ 00:10:55.716 { 00:10:55.716 "name": "BaseBdev1", 00:10:55.716 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:55.716 "is_configured": true, 00:10:55.716 "data_offset": 2048, 00:10:55.716 "data_size": 63488 00:10:55.716 }, 00:10:55.716 { 
00:10:55.716 "name": null, 00:10:55.716 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:55.716 "is_configured": false, 00:10:55.716 "data_offset": 0, 00:10:55.716 "data_size": 63488 00:10:55.716 }, 00:10:55.716 { 00:10:55.716 "name": "BaseBdev3", 00:10:55.716 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:55.716 "is_configured": true, 00:10:55.716 "data_offset": 2048, 00:10:55.716 "data_size": 63488 00:10:55.716 }, 00:10:55.716 { 00:10:55.716 "name": "BaseBdev4", 00:10:55.716 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:55.716 "is_configured": true, 00:10:55.716 "data_offset": 2048, 00:10:55.716 "data_size": 63488 00:10:55.716 } 00:10:55.716 ] 00:10:55.716 }' 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.716 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.283 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.283 [2024-11-06 12:41:44.804525] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.284 12:41:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.284 "name": "Existed_Raid", 00:10:56.284 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:56.284 "strip_size_kb": 64, 00:10:56.284 "state": "configuring", 00:10:56.284 "raid_level": "raid0", 00:10:56.284 "superblock": true, 00:10:56.284 "num_base_bdevs": 4, 00:10:56.284 "num_base_bdevs_discovered": 2, 00:10:56.284 "num_base_bdevs_operational": 4, 00:10:56.284 "base_bdevs_list": [ 00:10:56.284 { 00:10:56.284 "name": "BaseBdev1", 00:10:56.284 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:56.284 "is_configured": true, 00:10:56.284 "data_offset": 2048, 00:10:56.284 "data_size": 63488 00:10:56.284 }, 00:10:56.284 { 00:10:56.284 "name": null, 00:10:56.284 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:56.284 "is_configured": false, 00:10:56.284 "data_offset": 0, 00:10:56.284 "data_size": 63488 00:10:56.284 }, 00:10:56.284 { 00:10:56.284 "name": null, 00:10:56.284 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:56.284 "is_configured": false, 00:10:56.284 "data_offset": 0, 00:10:56.284 "data_size": 63488 00:10:56.284 }, 00:10:56.284 { 00:10:56.284 "name": "BaseBdev4", 00:10:56.284 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:56.284 "is_configured": true, 00:10:56.284 "data_offset": 2048, 00:10:56.284 "data_size": 63488 00:10:56.284 } 00:10:56.284 ] 00:10:56.284 }' 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.284 12:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.851 
12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.851 [2024-11-06 12:41:45.376668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.851 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.851 "name": "Existed_Raid", 00:10:56.851 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:56.851 "strip_size_kb": 64, 00:10:56.851 "state": "configuring", 00:10:56.851 "raid_level": "raid0", 00:10:56.851 "superblock": true, 00:10:56.851 "num_base_bdevs": 4, 00:10:56.851 "num_base_bdevs_discovered": 3, 00:10:56.851 "num_base_bdevs_operational": 4, 00:10:56.851 "base_bdevs_list": [ 00:10:56.851 { 00:10:56.851 "name": "BaseBdev1", 00:10:56.851 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:56.851 "is_configured": true, 00:10:56.851 "data_offset": 2048, 00:10:56.851 "data_size": 63488 00:10:56.851 }, 00:10:56.851 { 00:10:56.851 "name": null, 00:10:56.851 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:56.851 "is_configured": false, 00:10:56.852 "data_offset": 0, 00:10:56.852 "data_size": 63488 00:10:56.852 }, 00:10:56.852 { 00:10:56.852 "name": "BaseBdev3", 00:10:56.852 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:56.852 "is_configured": true, 00:10:56.852 "data_offset": 2048, 00:10:56.852 "data_size": 63488 00:10:56.852 }, 00:10:56.852 { 00:10:56.852 "name": "BaseBdev4", 00:10:56.852 "uuid": 
"8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:56.852 "is_configured": true, 00:10:56.852 "data_offset": 2048, 00:10:56.852 "data_size": 63488 00:10:56.852 } 00:10:56.852 ] 00:10:56.852 }' 00:10:56.852 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.852 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.419 12:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.419 [2024-11-06 12:41:45.924842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.419 "name": "Existed_Raid", 00:10:57.419 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:57.419 "strip_size_kb": 64, 00:10:57.419 "state": "configuring", 00:10:57.419 "raid_level": "raid0", 00:10:57.419 "superblock": true, 00:10:57.419 "num_base_bdevs": 4, 00:10:57.419 "num_base_bdevs_discovered": 2, 00:10:57.419 "num_base_bdevs_operational": 4, 00:10:57.419 "base_bdevs_list": [ 00:10:57.419 { 00:10:57.419 "name": null, 00:10:57.419 
"uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:57.419 "is_configured": false, 00:10:57.419 "data_offset": 0, 00:10:57.419 "data_size": 63488 00:10:57.419 }, 00:10:57.419 { 00:10:57.419 "name": null, 00:10:57.419 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:57.419 "is_configured": false, 00:10:57.419 "data_offset": 0, 00:10:57.419 "data_size": 63488 00:10:57.419 }, 00:10:57.419 { 00:10:57.419 "name": "BaseBdev3", 00:10:57.419 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:57.419 "is_configured": true, 00:10:57.419 "data_offset": 2048, 00:10:57.419 "data_size": 63488 00:10:57.419 }, 00:10:57.419 { 00:10:57.419 "name": "BaseBdev4", 00:10:57.419 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:57.419 "is_configured": true, 00:10:57.419 "data_offset": 2048, 00:10:57.419 "data_size": 63488 00:10:57.419 } 00:10:57.419 ] 00:10:57.419 }' 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.419 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.984 [2024-11-06 12:41:46.549952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.984 "name": "Existed_Raid", 00:10:57.984 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:57.984 "strip_size_kb": 64, 00:10:57.984 "state": "configuring", 00:10:57.984 "raid_level": "raid0", 00:10:57.984 "superblock": true, 00:10:57.984 "num_base_bdevs": 4, 00:10:57.984 "num_base_bdevs_discovered": 3, 00:10:57.984 "num_base_bdevs_operational": 4, 00:10:57.984 "base_bdevs_list": [ 00:10:57.984 { 00:10:57.984 "name": null, 00:10:57.984 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:57.984 "is_configured": false, 00:10:57.984 "data_offset": 0, 00:10:57.984 "data_size": 63488 00:10:57.984 }, 00:10:57.984 { 00:10:57.984 "name": "BaseBdev2", 00:10:57.984 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:57.984 "is_configured": true, 00:10:57.984 "data_offset": 2048, 00:10:57.984 "data_size": 63488 00:10:57.984 }, 00:10:57.984 { 00:10:57.984 "name": "BaseBdev3", 00:10:57.984 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:57.984 "is_configured": true, 00:10:57.984 "data_offset": 2048, 00:10:57.984 "data_size": 63488 00:10:57.984 }, 00:10:57.984 { 00:10:57.984 "name": "BaseBdev4", 00:10:57.984 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:57.984 "is_configured": true, 00:10:57.984 "data_offset": 2048, 00:10:57.984 "data_size": 63488 00:10:57.984 } 00:10:57.984 ] 00:10:57.984 }' 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.984 12:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.550 12:41:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 415216df-5d62-460b-8936-88eecb0583d5 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 [2024-11-06 12:41:47.184005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:58.550 [2024-11-06 12:41:47.184656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:58.550 [2024-11-06 12:41:47.184682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.550 [2024-11-06 12:41:47.185085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:58.550 NewBaseBdev 00:10:58.550 [2024-11-06 12:41:47.185310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:58.550 [2024-11-06 12:41:47.185334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:58.550 [2024-11-06 12:41:47.185505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.550 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:58.551 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.551 12:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.809 [ 00:10:58.809 { 00:10:58.809 "name": "NewBaseBdev", 00:10:58.809 "aliases": [ 00:10:58.809 "415216df-5d62-460b-8936-88eecb0583d5" 00:10:58.809 ], 00:10:58.809 "product_name": "Malloc disk", 00:10:58.809 "block_size": 512, 00:10:58.809 "num_blocks": 65536, 00:10:58.809 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:58.809 "assigned_rate_limits": { 00:10:58.809 "rw_ios_per_sec": 0, 00:10:58.809 "rw_mbytes_per_sec": 0, 00:10:58.809 "r_mbytes_per_sec": 0, 00:10:58.809 "w_mbytes_per_sec": 0 00:10:58.809 }, 00:10:58.809 "claimed": true, 00:10:58.809 "claim_type": "exclusive_write", 00:10:58.809 "zoned": false, 00:10:58.809 "supported_io_types": { 00:10:58.809 "read": true, 00:10:58.809 "write": true, 00:10:58.809 "unmap": true, 00:10:58.809 "flush": true, 00:10:58.809 "reset": true, 00:10:58.809 "nvme_admin": false, 00:10:58.809 "nvme_io": false, 00:10:58.809 "nvme_io_md": false, 00:10:58.809 "write_zeroes": true, 00:10:58.809 "zcopy": true, 00:10:58.809 "get_zone_info": false, 00:10:58.809 "zone_management": false, 00:10:58.809 "zone_append": false, 00:10:58.809 "compare": false, 00:10:58.809 "compare_and_write": false, 00:10:58.809 "abort": true, 00:10:58.809 "seek_hole": false, 00:10:58.809 "seek_data": false, 00:10:58.809 "copy": true, 00:10:58.809 "nvme_iov_md": false 00:10:58.809 }, 00:10:58.809 "memory_domains": [ 00:10:58.809 { 00:10:58.809 "dma_device_id": "system", 00:10:58.809 "dma_device_type": 1 00:10:58.809 }, 00:10:58.809 { 00:10:58.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.809 "dma_device_type": 2 00:10:58.809 } 00:10:58.809 ], 00:10:58.809 "driver_specific": {} 00:10:58.809 } 00:10:58.809 ] 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:58.810 12:41:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.810 "name": "Existed_Raid", 00:10:58.810 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:58.810 "strip_size_kb": 64, 00:10:58.810 
"state": "online", 00:10:58.810 "raid_level": "raid0", 00:10:58.810 "superblock": true, 00:10:58.810 "num_base_bdevs": 4, 00:10:58.810 "num_base_bdevs_discovered": 4, 00:10:58.810 "num_base_bdevs_operational": 4, 00:10:58.810 "base_bdevs_list": [ 00:10:58.810 { 00:10:58.810 "name": "NewBaseBdev", 00:10:58.810 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:58.810 "is_configured": true, 00:10:58.810 "data_offset": 2048, 00:10:58.810 "data_size": 63488 00:10:58.810 }, 00:10:58.810 { 00:10:58.810 "name": "BaseBdev2", 00:10:58.810 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:58.810 "is_configured": true, 00:10:58.810 "data_offset": 2048, 00:10:58.810 "data_size": 63488 00:10:58.810 }, 00:10:58.810 { 00:10:58.810 "name": "BaseBdev3", 00:10:58.810 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:58.810 "is_configured": true, 00:10:58.810 "data_offset": 2048, 00:10:58.810 "data_size": 63488 00:10:58.810 }, 00:10:58.810 { 00:10:58.810 "name": "BaseBdev4", 00:10:58.810 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:58.810 "is_configured": true, 00:10:58.810 "data_offset": 2048, 00:10:58.810 "data_size": 63488 00:10:58.810 } 00:10:58.810 ] 00:10:58.810 }' 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.810 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.377 
12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.377 [2024-11-06 12:41:47.752972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.377 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.377 "name": "Existed_Raid", 00:10:59.377 "aliases": [ 00:10:59.377 "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200" 00:10:59.377 ], 00:10:59.377 "product_name": "Raid Volume", 00:10:59.377 "block_size": 512, 00:10:59.377 "num_blocks": 253952, 00:10:59.377 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:59.377 "assigned_rate_limits": { 00:10:59.377 "rw_ios_per_sec": 0, 00:10:59.377 "rw_mbytes_per_sec": 0, 00:10:59.377 "r_mbytes_per_sec": 0, 00:10:59.377 "w_mbytes_per_sec": 0 00:10:59.377 }, 00:10:59.377 "claimed": false, 00:10:59.377 "zoned": false, 00:10:59.377 "supported_io_types": { 00:10:59.377 "read": true, 00:10:59.377 "write": true, 00:10:59.377 "unmap": true, 00:10:59.377 "flush": true, 00:10:59.377 "reset": true, 00:10:59.377 "nvme_admin": false, 00:10:59.377 "nvme_io": false, 00:10:59.377 "nvme_io_md": false, 00:10:59.377 "write_zeroes": true, 00:10:59.377 "zcopy": false, 00:10:59.377 "get_zone_info": false, 00:10:59.377 "zone_management": false, 00:10:59.377 "zone_append": false, 00:10:59.377 "compare": false, 00:10:59.377 "compare_and_write": false, 00:10:59.377 "abort": 
false, 00:10:59.377 "seek_hole": false, 00:10:59.377 "seek_data": false, 00:10:59.377 "copy": false, 00:10:59.377 "nvme_iov_md": false 00:10:59.377 }, 00:10:59.377 "memory_domains": [ 00:10:59.377 { 00:10:59.377 "dma_device_id": "system", 00:10:59.377 "dma_device_type": 1 00:10:59.377 }, 00:10:59.377 { 00:10:59.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.377 "dma_device_type": 2 00:10:59.377 }, 00:10:59.377 { 00:10:59.377 "dma_device_id": "system", 00:10:59.377 "dma_device_type": 1 00:10:59.377 }, 00:10:59.377 { 00:10:59.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.377 "dma_device_type": 2 00:10:59.377 }, 00:10:59.377 { 00:10:59.377 "dma_device_id": "system", 00:10:59.377 "dma_device_type": 1 00:10:59.377 }, 00:10:59.377 { 00:10:59.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.377 "dma_device_type": 2 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "dma_device_id": "system", 00:10:59.378 "dma_device_type": 1 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.378 "dma_device_type": 2 00:10:59.378 } 00:10:59.378 ], 00:10:59.378 "driver_specific": { 00:10:59.378 "raid": { 00:10:59.378 "uuid": "679ea4c9-eecc-4b62-a9a0-4a6cb9f29200", 00:10:59.378 "strip_size_kb": 64, 00:10:59.378 "state": "online", 00:10:59.378 "raid_level": "raid0", 00:10:59.378 "superblock": true, 00:10:59.378 "num_base_bdevs": 4, 00:10:59.378 "num_base_bdevs_discovered": 4, 00:10:59.378 "num_base_bdevs_operational": 4, 00:10:59.378 "base_bdevs_list": [ 00:10:59.378 { 00:10:59.378 "name": "NewBaseBdev", 00:10:59.378 "uuid": "415216df-5d62-460b-8936-88eecb0583d5", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "name": "BaseBdev2", 00:10:59.378 "uuid": "93badcc0-9a31-4772-9716-7a0de16b5657", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 
"name": "BaseBdev3", 00:10:59.378 "uuid": "ca5e90d3-8003-4be6-9d39-c68f0d3f1fb4", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 }, 00:10:59.378 { 00:10:59.378 "name": "BaseBdev4", 00:10:59.378 "uuid": "8134d1ea-dde0-4e83-b11e-5c592af29027", 00:10:59.378 "is_configured": true, 00:10:59.378 "data_offset": 2048, 00:10:59.378 "data_size": 63488 00:10:59.378 } 00:10:59.378 ] 00:10:59.378 } 00:10:59.378 } 00:10:59.378 }' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:59.378 BaseBdev2 00:10:59.378 BaseBdev3 00:10:59.378 BaseBdev4' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.378 12:41:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.378 12:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.378 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.378 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.378 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.637 [2024-11-06 12:41:48.148392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:59.637 [2024-11-06 12:41:48.148436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.637 [2024-11-06 12:41:48.148567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.637 [2024-11-06 12:41:48.148677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.637 [2024-11-06 12:41:48.148696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70166 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70166 ']' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70166 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70166 00:10:59.637 killing process with pid 70166 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70166' 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70166 00:10:59.637 [2024-11-06 12:41:48.188460] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.637 12:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70166 00:11:00.203 [2024-11-06 12:41:48.564531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.135 12:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.135 00:11:01.135 real 0m13.041s 00:11:01.135 user 0m21.388s 00:11:01.135 sys 0m1.873s 00:11:01.135 12:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.135 12:41:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.135 ************************************ 00:11:01.135 END TEST raid_state_function_test_sb 00:11:01.135 ************************************ 00:11:01.463 12:41:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:01.463 12:41:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:01.463 12:41:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.463 12:41:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.463 ************************************ 00:11:01.463 START TEST raid_superblock_test 00:11:01.463 ************************************ 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:01.463 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70857 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70857 00:11:01.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70857 ']' 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:01.464 12:41:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.464 [2024-11-06 12:41:49.958014] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:01.464 [2024-11-06 12:41:49.958178] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70857 ] 00:11:01.721 [2024-11-06 12:41:50.144121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.721 [2024-11-06 12:41:50.322931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.978 [2024-11-06 12:41:50.574324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.978 [2024-11-06 12:41:50.574369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:02.544 
12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.544 malloc1 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.544 [2024-11-06 12:41:51.066548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.544 [2024-11-06 12:41:51.066643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.544 [2024-11-06 12:41:51.066683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.544 [2024-11-06 12:41:51.066705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.544 [2024-11-06 12:41:51.069960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.544 [2024-11-06 12:41:51.070158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.544 pt1 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.544 malloc2 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.544 [2024-11-06 12:41:51.130934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.544 [2024-11-06 12:41:51.131032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.544 [2024-11-06 12:41:51.131078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:02.544 [2024-11-06 12:41:51.131098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.544 [2024-11-06 12:41:51.134796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.544 [2024-11-06 12:41:51.134860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.544 
pt2 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.544 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 malloc3 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 [2024-11-06 12:41:51.209338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.804 [2024-11-06 12:41:51.209428] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.804 [2024-11-06 12:41:51.209482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:02.804 [2024-11-06 12:41:51.209509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.804 [2024-11-06 12:41:51.213376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.804 [2024-11-06 12:41:51.213458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.804 pt3 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 malloc4 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.804 [2024-11-06 12:41:51.276964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:02.804 [2024-11-06 12:41:51.277218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.804 [2024-11-06 12:41:51.277434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:02.804 [2024-11-06 12:41:51.277621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.804 [2024-11-06 12:41:51.281681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.804 [2024-11-06 12:41:51.281885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:02.804 pt4 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.804 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.805 [2024-11-06 12:41:51.290374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:02.805 [2024-11-06 
12:41:51.293583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.805 [2024-11-06 12:41:51.293698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.805 [2024-11-06 12:41:51.293841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:02.805 [2024-11-06 12:41:51.294178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:02.805 [2024-11-06 12:41:51.294231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.805 [2024-11-06 12:41:51.294657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.805 [2024-11-06 12:41:51.294933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:02.805 [2024-11-06 12:41:51.294958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:02.805 [2024-11-06 12:41:51.295294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.805 "name": "raid_bdev1", 00:11:02.805 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:02.805 "strip_size_kb": 64, 00:11:02.805 "state": "online", 00:11:02.805 "raid_level": "raid0", 00:11:02.805 "superblock": true, 00:11:02.805 "num_base_bdevs": 4, 00:11:02.805 "num_base_bdevs_discovered": 4, 00:11:02.805 "num_base_bdevs_operational": 4, 00:11:02.805 "base_bdevs_list": [ 00:11:02.805 { 00:11:02.805 "name": "pt1", 00:11:02.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.805 "is_configured": true, 00:11:02.805 "data_offset": 2048, 00:11:02.805 "data_size": 63488 00:11:02.805 }, 00:11:02.805 { 00:11:02.805 "name": "pt2", 00:11:02.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.805 "is_configured": true, 00:11:02.805 "data_offset": 2048, 00:11:02.805 "data_size": 63488 00:11:02.805 }, 00:11:02.805 { 00:11:02.805 "name": "pt3", 00:11:02.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.805 "is_configured": true, 00:11:02.805 "data_offset": 2048, 00:11:02.805 
"data_size": 63488 00:11:02.805 }, 00:11:02.805 { 00:11:02.805 "name": "pt4", 00:11:02.805 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:02.805 "is_configured": true, 00:11:02.805 "data_offset": 2048, 00:11:02.805 "data_size": 63488 00:11:02.805 } 00:11:02.805 ] 00:11:02.805 }' 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.805 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.371 [2024-11-06 12:41:51.814979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.371 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.371 "name": "raid_bdev1", 00:11:03.371 "aliases": [ 00:11:03.371 "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8" 
00:11:03.371 ], 00:11:03.371 "product_name": "Raid Volume", 00:11:03.371 "block_size": 512, 00:11:03.371 "num_blocks": 253952, 00:11:03.371 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:03.371 "assigned_rate_limits": { 00:11:03.371 "rw_ios_per_sec": 0, 00:11:03.371 "rw_mbytes_per_sec": 0, 00:11:03.371 "r_mbytes_per_sec": 0, 00:11:03.371 "w_mbytes_per_sec": 0 00:11:03.371 }, 00:11:03.371 "claimed": false, 00:11:03.371 "zoned": false, 00:11:03.371 "supported_io_types": { 00:11:03.371 "read": true, 00:11:03.371 "write": true, 00:11:03.371 "unmap": true, 00:11:03.371 "flush": true, 00:11:03.371 "reset": true, 00:11:03.371 "nvme_admin": false, 00:11:03.371 "nvme_io": false, 00:11:03.371 "nvme_io_md": false, 00:11:03.371 "write_zeroes": true, 00:11:03.372 "zcopy": false, 00:11:03.372 "get_zone_info": false, 00:11:03.372 "zone_management": false, 00:11:03.372 "zone_append": false, 00:11:03.372 "compare": false, 00:11:03.372 "compare_and_write": false, 00:11:03.372 "abort": false, 00:11:03.372 "seek_hole": false, 00:11:03.372 "seek_data": false, 00:11:03.372 "copy": false, 00:11:03.372 "nvme_iov_md": false 00:11:03.372 }, 00:11:03.372 "memory_domains": [ 00:11:03.372 { 00:11:03.372 "dma_device_id": "system", 00:11:03.372 "dma_device_type": 1 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.372 "dma_device_type": 2 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": "system", 00:11:03.372 "dma_device_type": 1 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.372 "dma_device_type": 2 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": "system", 00:11:03.372 "dma_device_type": 1 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.372 "dma_device_type": 2 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": "system", 00:11:03.372 "dma_device_type": 1 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:03.372 "dma_device_type": 2 00:11:03.372 } 00:11:03.372 ], 00:11:03.372 "driver_specific": { 00:11:03.372 "raid": { 00:11:03.372 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:03.372 "strip_size_kb": 64, 00:11:03.372 "state": "online", 00:11:03.372 "raid_level": "raid0", 00:11:03.372 "superblock": true, 00:11:03.372 "num_base_bdevs": 4, 00:11:03.372 "num_base_bdevs_discovered": 4, 00:11:03.372 "num_base_bdevs_operational": 4, 00:11:03.372 "base_bdevs_list": [ 00:11:03.372 { 00:11:03.372 "name": "pt1", 00:11:03.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "name": "pt2", 00:11:03.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "name": "pt3", 00:11:03.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "name": "pt4", 00:11:03.372 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 } 00:11:03.372 ] 00:11:03.372 } 00:11:03.372 } 00:11:03.372 }' 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.372 pt2 00:11:03.372 pt3 00:11:03.372 pt4' 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.372 12:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.372 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.372 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.372 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.629 12:41:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.629 [2024-11-06 12:41:52.206980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8 ']' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.629 [2024-11-06 12:41:52.250588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.629 [2024-11-06 12:41:52.250737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.629 [2024-11-06 12:41:52.250878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.629 [2024-11-06 12:41:52.250974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.629 [2024-11-06 12:41:52.250999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.629 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.890 12:41:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 [2024-11-06 12:41:52.410718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:03.890 [2024-11-06 12:41:52.413702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:03.890 [2024-11-06 12:41:52.413880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:03.890 [2024-11-06 12:41:52.413979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:03.890 [2024-11-06 12:41:52.414222] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:03.890 [2024-11-06 12:41:52.414486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:03.890 [2024-11-06 12:41:52.414721] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:03.890 [2024-11-06 12:41:52.414890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:03.890 [2024-11-06 12:41:52.415100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.890 [2024-11-06 12:41:52.415155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:03.890 request: 00:11:03.890 { 00:11:03.890 "name": "raid_bdev1", 00:11:03.890 "raid_level": "raid0", 00:11:03.890 "base_bdevs": [ 00:11:03.890 "malloc1", 00:11:03.890 "malloc2", 00:11:03.890 "malloc3", 00:11:03.890 "malloc4" 00:11:03.890 ], 00:11:03.890 "strip_size_kb": 64, 00:11:03.890 "superblock": false, 00:11:03.890 "method": "bdev_raid_create", 00:11:03.890 "req_id": 1 00:11:03.890 } 00:11:03.890 Got JSON-RPC error response 00:11:03.890 response: 00:11:03.890 { 00:11:03.890 "code": -17, 00:11:03.890 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:03.890 } 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.890 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.890 [2024-11-06 12:41:52.475615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.891 [2024-11-06 12:41:52.475703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.891 [2024-11-06 12:41:52.475733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.891 [2024-11-06 12:41:52.475751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.891 [2024-11-06 12:41:52.478831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.891 [2024-11-06 12:41:52.479010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.891 [2024-11-06 12:41:52.479142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:03.891 [2024-11-06 12:41:52.479256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.891 pt1 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.891 "name": "raid_bdev1", 00:11:03.891 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:03.891 "strip_size_kb": 64, 00:11:03.891 "state": "configuring", 00:11:03.891 "raid_level": "raid0", 00:11:03.891 "superblock": true, 00:11:03.891 "num_base_bdevs": 4, 00:11:03.891 "num_base_bdevs_discovered": 1, 00:11:03.891 "num_base_bdevs_operational": 4, 00:11:03.891 "base_bdevs_list": [ 00:11:03.891 { 00:11:03.891 "name": "pt1", 00:11:03.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.891 "is_configured": true, 00:11:03.891 "data_offset": 2048, 00:11:03.891 "data_size": 63488 00:11:03.891 }, 00:11:03.891 { 00:11:03.891 "name": null, 00:11:03.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.891 "is_configured": false, 00:11:03.891 "data_offset": 2048, 00:11:03.891 "data_size": 63488 00:11:03.891 }, 00:11:03.891 { 00:11:03.891 "name": null, 00:11:03.891 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.891 "is_configured": false, 00:11:03.891 "data_offset": 2048, 00:11:03.891 "data_size": 63488 00:11:03.891 }, 00:11:03.891 { 00:11:03.891 "name": null, 00:11:03.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.891 "is_configured": false, 00:11:03.891 "data_offset": 2048, 00:11:03.891 "data_size": 63488 00:11:03.891 } 00:11:03.891 ] 00:11:03.891 }' 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.891 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.457 [2024-11-06 12:41:52.963826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.457 [2024-11-06 12:41:52.963952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.457 [2024-11-06 12:41:52.963983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:04.457 [2024-11-06 12:41:52.964001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.457 [2024-11-06 12:41:52.964645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.457 [2024-11-06 12:41:52.964690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.457 [2024-11-06 12:41:52.964802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.457 [2024-11-06 12:41:52.964848] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.457 pt2 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.457 [2024-11-06 12:41:52.975768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.457 12:41:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.457 12:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.457 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.457 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.457 "name": "raid_bdev1", 00:11:04.457 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:04.457 "strip_size_kb": 64, 00:11:04.457 "state": "configuring", 00:11:04.457 "raid_level": "raid0", 00:11:04.457 "superblock": true, 00:11:04.457 "num_base_bdevs": 4, 00:11:04.457 "num_base_bdevs_discovered": 1, 00:11:04.457 "num_base_bdevs_operational": 4, 00:11:04.457 "base_bdevs_list": [ 00:11:04.457 { 00:11:04.457 "name": "pt1", 00:11:04.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.457 "is_configured": true, 00:11:04.457 "data_offset": 2048, 00:11:04.457 "data_size": 63488 00:11:04.457 }, 00:11:04.457 { 00:11:04.457 "name": null, 00:11:04.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.457 "is_configured": false, 00:11:04.457 "data_offset": 0, 00:11:04.457 "data_size": 63488 00:11:04.457 }, 00:11:04.457 { 00:11:04.457 "name": null, 00:11:04.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.457 "is_configured": false, 00:11:04.457 "data_offset": 2048, 00:11:04.457 "data_size": 63488 00:11:04.457 }, 00:11:04.457 { 00:11:04.457 "name": null, 00:11:04.457 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.457 "is_configured": false, 00:11:04.457 "data_offset": 2048, 00:11:04.457 "data_size": 63488 00:11:04.457 } 00:11:04.457 ] 00:11:04.457 }' 00:11:04.457 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.457 12:41:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.023 [2024-11-06 12:41:53.499931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.023 [2024-11-06 12:41:53.500034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.023 [2024-11-06 12:41:53.500068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:05.023 [2024-11-06 12:41:53.500098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.023 [2024-11-06 12:41:53.500779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.023 [2024-11-06 12:41:53.500817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.023 [2024-11-06 12:41:53.500955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.023 [2024-11-06 12:41:53.500988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.023 pt2 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.023 [2024-11-06 12:41:53.507852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.023 [2024-11-06 12:41:53.507909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.023 [2024-11-06 12:41:53.507944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:05.023 [2024-11-06 12:41:53.507960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.023 [2024-11-06 12:41:53.508459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.023 [2024-11-06 12:41:53.508499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.023 [2024-11-06 12:41:53.508579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:05.023 [2024-11-06 12:41:53.508606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.023 pt3 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.023 [2024-11-06 12:41:53.519841] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:05.023 [2024-11-06 12:41:53.520036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.023 [2024-11-06 12:41:53.520121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:05.023 [2024-11-06 12:41:53.520270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.023 [2024-11-06 12:41:53.520784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.023 [2024-11-06 12:41:53.520935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:05.023 [2024-11-06 12:41:53.521140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:05.023 [2024-11-06 12:41:53.521291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:05.023 [2024-11-06 12:41:53.521510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.023 [2024-11-06 12:41:53.521531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.023 [2024-11-06 12:41:53.521852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:05.023 [2024-11-06 12:41:53.522052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.023 [2024-11-06 12:41:53.522073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:05.023 [2024-11-06 12:41:53.522270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.023 pt4 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.023 "name": "raid_bdev1", 00:11:05.023 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:05.023 "strip_size_kb": 64, 00:11:05.023 "state": "online", 00:11:05.023 "raid_level": "raid0", 00:11:05.023 
"superblock": true, 00:11:05.023 "num_base_bdevs": 4, 00:11:05.023 "num_base_bdevs_discovered": 4, 00:11:05.023 "num_base_bdevs_operational": 4, 00:11:05.023 "base_bdevs_list": [ 00:11:05.023 { 00:11:05.023 "name": "pt1", 00:11:05.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.023 "is_configured": true, 00:11:05.023 "data_offset": 2048, 00:11:05.023 "data_size": 63488 00:11:05.023 }, 00:11:05.023 { 00:11:05.023 "name": "pt2", 00:11:05.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.023 "is_configured": true, 00:11:05.023 "data_offset": 2048, 00:11:05.023 "data_size": 63488 00:11:05.023 }, 00:11:05.023 { 00:11:05.023 "name": "pt3", 00:11:05.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.023 "is_configured": true, 00:11:05.023 "data_offset": 2048, 00:11:05.023 "data_size": 63488 00:11:05.023 }, 00:11:05.023 { 00:11:05.023 "name": "pt4", 00:11:05.023 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.023 "is_configured": true, 00:11:05.023 "data_offset": 2048, 00:11:05.023 "data_size": 63488 00:11:05.023 } 00:11:05.023 ] 00:11:05.023 }' 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.023 12:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.591 12:41:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.591 [2024-11-06 12:41:54.016507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.591 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.591 "name": "raid_bdev1", 00:11:05.591 "aliases": [ 00:11:05.591 "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8" 00:11:05.591 ], 00:11:05.591 "product_name": "Raid Volume", 00:11:05.591 "block_size": 512, 00:11:05.591 "num_blocks": 253952, 00:11:05.591 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:05.591 "assigned_rate_limits": { 00:11:05.591 "rw_ios_per_sec": 0, 00:11:05.591 "rw_mbytes_per_sec": 0, 00:11:05.592 "r_mbytes_per_sec": 0, 00:11:05.592 "w_mbytes_per_sec": 0 00:11:05.592 }, 00:11:05.592 "claimed": false, 00:11:05.592 "zoned": false, 00:11:05.592 "supported_io_types": { 00:11:05.592 "read": true, 00:11:05.592 "write": true, 00:11:05.592 "unmap": true, 00:11:05.592 "flush": true, 00:11:05.592 "reset": true, 00:11:05.592 "nvme_admin": false, 00:11:05.592 "nvme_io": false, 00:11:05.592 "nvme_io_md": false, 00:11:05.592 "write_zeroes": true, 00:11:05.592 "zcopy": false, 00:11:05.592 "get_zone_info": false, 00:11:05.592 "zone_management": false, 00:11:05.592 "zone_append": false, 00:11:05.592 "compare": false, 00:11:05.592 "compare_and_write": false, 00:11:05.592 "abort": false, 00:11:05.592 "seek_hole": false, 00:11:05.592 "seek_data": false, 00:11:05.592 "copy": false, 00:11:05.592 "nvme_iov_md": false 00:11:05.592 }, 00:11:05.592 
"memory_domains": [ 00:11:05.592 { 00:11:05.592 "dma_device_id": "system", 00:11:05.592 "dma_device_type": 1 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.592 "dma_device_type": 2 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "system", 00:11:05.592 "dma_device_type": 1 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.592 "dma_device_type": 2 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "system", 00:11:05.592 "dma_device_type": 1 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.592 "dma_device_type": 2 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "system", 00:11:05.592 "dma_device_type": 1 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.592 "dma_device_type": 2 00:11:05.592 } 00:11:05.592 ], 00:11:05.592 "driver_specific": { 00:11:05.592 "raid": { 00:11:05.592 "uuid": "7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8", 00:11:05.592 "strip_size_kb": 64, 00:11:05.592 "state": "online", 00:11:05.592 "raid_level": "raid0", 00:11:05.592 "superblock": true, 00:11:05.592 "num_base_bdevs": 4, 00:11:05.592 "num_base_bdevs_discovered": 4, 00:11:05.592 "num_base_bdevs_operational": 4, 00:11:05.592 "base_bdevs_list": [ 00:11:05.592 { 00:11:05.592 "name": "pt1", 00:11:05.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.592 "is_configured": true, 00:11:05.592 "data_offset": 2048, 00:11:05.592 "data_size": 63488 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "name": "pt2", 00:11:05.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.592 "is_configured": true, 00:11:05.592 "data_offset": 2048, 00:11:05.592 "data_size": 63488 00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "name": "pt3", 00:11:05.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.592 "is_configured": true, 00:11:05.592 "data_offset": 2048, 00:11:05.592 "data_size": 63488 
00:11:05.592 }, 00:11:05.592 { 00:11:05.592 "name": "pt4", 00:11:05.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.592 "is_configured": true, 00:11:05.592 "data_offset": 2048, 00:11:05.592 "data_size": 63488 00:11:05.592 } 00:11:05.592 ] 00:11:05.592 } 00:11:05.592 } 00:11:05.592 }' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.592 pt2 00:11:05.592 pt3 00:11:05.592 pt4' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.851 [2024-11-06 12:41:54.408565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8 '!=' 7a9a1ae2-d4b4-4ce1-b177-cb616eef92e8 ']' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70857 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70857 ']' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70857 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70857 00:11:05.851 killing process with pid 70857 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70857' 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70857 00:11:05.851 [2024-11-06 12:41:54.480141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.851 12:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70857 00:11:05.851 [2024-11-06 12:41:54.480300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.851 [2024-11-06 12:41:54.480396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.851 [2024-11-06 12:41:54.480412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:06.418 [2024-11-06 12:41:54.920538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.801 12:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:07.801 00:11:07.801 real 0m6.196s 00:11:07.801 user 0m9.159s 00:11:07.801 sys 0m0.938s 00:11:07.801 ************************************ 00:11:07.801 END TEST raid_superblock_test 00:11:07.801 ************************************ 00:11:07.801 12:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.801 12:41:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.801 12:41:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:07.801 12:41:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:07.801 12:41:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.801 12:41:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.801 ************************************ 00:11:07.801 START TEST raid_read_error_test 00:11:07.801 ************************************ 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lNuvF3lBf6 00:11:07.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71123 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71123 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71123 ']' 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.801 12:41:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.801 [2024-11-06 12:41:56.243724] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:07.801 [2024-11-06 12:41:56.244036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71123 ] 00:11:07.801 [2024-11-06 12:41:56.432457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.060 [2024-11-06 12:41:56.563493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.319 [2024-11-06 12:41:56.772138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.319 [2024-11-06 12:41:56.772182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 BaseBdev1_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 true 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 [2024-11-06 12:41:57.428468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.886 [2024-11-06 12:41:57.428583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.886 [2024-11-06 12:41:57.428641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.886 [2024-11-06 12:41:57.428674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.886 [2024-11-06 12:41:57.432781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.886 [2024-11-06 12:41:57.433160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.886 BaseBdev1 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 BaseBdev2_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 true 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 [2024-11-06 12:41:57.502391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.886 [2024-11-06 12:41:57.502784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.886 [2024-11-06 12:41:57.502838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.886 [2024-11-06 12:41:57.502870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.886 [2024-11-06 12:41:57.506581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.886 [2024-11-06 12:41:57.506674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.886 BaseBdev2 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.886 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 BaseBdev3_malloc 00:11:09.144 12:41:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 true 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 [2024-11-06 12:41:57.574487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.144 [2024-11-06 12:41:57.574572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.144 [2024-11-06 12:41:57.574616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.144 [2024-11-06 12:41:57.574634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.144 [2024-11-06 12:41:57.577575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.144 [2024-11-06 12:41:57.577627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:09.144 BaseBdev3 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 BaseBdev4_malloc 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 true 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 [2024-11-06 12:41:57.631365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:09.144 [2024-11-06 12:41:57.631672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.144 [2024-11-06 12:41:57.631721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.144 [2024-11-06 12:41:57.631752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.144 [2024-11-06 12:41:57.634475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.144 [2024-11-06 12:41:57.634523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:09.144 BaseBdev4 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 [2024-11-06 12:41:57.639512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.144 [2024-11-06 12:41:57.641827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.144 [2024-11-06 12:41:57.642082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.144 [2024-11-06 12:41:57.642223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.144 [2024-11-06 12:41:57.642517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:09.144 [2024-11-06 12:41:57.642545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.144 [2024-11-06 12:41:57.642854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:09.144 [2024-11-06 12:41:57.643071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:09.144 [2024-11-06 12:41:57.643089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:09.144 [2024-11-06 12:41:57.643285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:09.144 12:41:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.144 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.144 "name": "raid_bdev1", 00:11:09.144 "uuid": "cd35d03f-f58d-40af-9145-3d20ba29e98e", 00:11:09.144 "strip_size_kb": 64, 00:11:09.144 "state": "online", 00:11:09.145 "raid_level": "raid0", 00:11:09.145 "superblock": true, 00:11:09.145 "num_base_bdevs": 4, 00:11:09.145 "num_base_bdevs_discovered": 4, 00:11:09.145 "num_base_bdevs_operational": 4, 00:11:09.145 "base_bdevs_list": [ 00:11:09.145 
{ 00:11:09.145 "name": "BaseBdev1", 00:11:09.145 "uuid": "d14511ca-46cb-53bc-95cc-4332021e3efb", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev2", 00:11:09.145 "uuid": "1229f42e-0a2b-5b8c-b4a0-65759632e658", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev3", 00:11:09.145 "uuid": "0d4bd342-38d6-54bc-a227-854410545b18", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 }, 00:11:09.145 { 00:11:09.145 "name": "BaseBdev4", 00:11:09.145 "uuid": "367f5ff0-6b43-5b5f-bc09-5abfc1aaac6c", 00:11:09.145 "is_configured": true, 00:11:09.145 "data_offset": 2048, 00:11:09.145 "data_size": 63488 00:11:09.145 } 00:11:09.145 ] 00:11:09.145 }' 00:11:09.145 12:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.145 12:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.715 12:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.715 12:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.715 [2024-11-06 12:41:58.253139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.650 12:41:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.650 12:41:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.650 "name": "raid_bdev1", 00:11:10.650 "uuid": "cd35d03f-f58d-40af-9145-3d20ba29e98e", 00:11:10.650 "strip_size_kb": 64, 00:11:10.650 "state": "online", 00:11:10.650 "raid_level": "raid0", 00:11:10.650 "superblock": true, 00:11:10.650 "num_base_bdevs": 4, 00:11:10.650 "num_base_bdevs_discovered": 4, 00:11:10.650 "num_base_bdevs_operational": 4, 00:11:10.650 "base_bdevs_list": [ 00:11:10.650 { 00:11:10.650 "name": "BaseBdev1", 00:11:10.650 "uuid": "d14511ca-46cb-53bc-95cc-4332021e3efb", 00:11:10.650 "is_configured": true, 00:11:10.650 "data_offset": 2048, 00:11:10.650 "data_size": 63488 00:11:10.650 }, 00:11:10.650 { 00:11:10.650 "name": "BaseBdev2", 00:11:10.650 "uuid": "1229f42e-0a2b-5b8c-b4a0-65759632e658", 00:11:10.650 "is_configured": true, 00:11:10.650 "data_offset": 2048, 00:11:10.650 "data_size": 63488 00:11:10.650 }, 00:11:10.650 { 00:11:10.650 "name": "BaseBdev3", 00:11:10.650 "uuid": "0d4bd342-38d6-54bc-a227-854410545b18", 00:11:10.650 "is_configured": true, 00:11:10.650 "data_offset": 2048, 00:11:10.650 "data_size": 63488 00:11:10.650 }, 00:11:10.650 { 00:11:10.650 "name": "BaseBdev4", 00:11:10.650 "uuid": "367f5ff0-6b43-5b5f-bc09-5abfc1aaac6c", 00:11:10.650 "is_configured": true, 00:11:10.650 "data_offset": 2048, 00:11:10.650 "data_size": 63488 00:11:10.650 } 00:11:10.650 ] 00:11:10.650 }' 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.650 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.216 [2024-11-06 12:41:59.676301] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.216 [2024-11-06 12:41:59.676573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.216 [2024-11-06 12:41:59.679934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.216 { 00:11:11.216 "results": [ 00:11:11.216 { 00:11:11.216 "job": "raid_bdev1", 00:11:11.216 "core_mask": "0x1", 00:11:11.216 "workload": "randrw", 00:11:11.216 "percentage": 50, 00:11:11.216 "status": "finished", 00:11:11.216 "queue_depth": 1, 00:11:11.216 "io_size": 131072, 00:11:11.216 "runtime": 1.420603, 00:11:11.216 "iops": 10853.841643302176, 00:11:11.216 "mibps": 1356.730205412772, 00:11:11.216 "io_failed": 1, 00:11:11.216 "io_timeout": 0, 00:11:11.216 "avg_latency_us": 128.77918358684119, 00:11:11.216 "min_latency_us": 38.86545454545455, 00:11:11.216 "max_latency_us": 1742.6618181818183 00:11:11.216 } 00:11:11.216 ], 00:11:11.216 "core_count": 1 00:11:11.216 } 00:11:11.216 [2024-11-06 12:41:59.680154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.216 [2024-11-06 12:41:59.680246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.216 [2024-11-06 12:41:59.680270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71123 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71123 ']' 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71123 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71123 00:11:11.216 killing process with pid 71123 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71123' 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71123 00:11:11.216 [2024-11-06 12:41:59.712755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.216 12:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71123 00:11:11.473 [2024-11-06 12:42:00.000657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lNuvF3lBf6 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:12.846 00:11:12.846 real 0m4.980s 00:11:12.846 user 0m6.161s 00:11:12.846 sys 0m0.634s 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:12.846 ************************************ 00:11:12.846 END TEST raid_read_error_test 00:11:12.846 ************************************ 00:11:12.846 12:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.846 12:42:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:12.846 12:42:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:12.846 12:42:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.846 12:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.846 ************************************ 00:11:12.846 START TEST raid_write_error_test 00:11:12.846 ************************************ 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.11DpWqk69R 00:11:12.846 12:42:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71274 00:11:12.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71274 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71274 ']' 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:12.846 12:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.846 [2024-11-06 12:42:01.257112] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:12.846 [2024-11-06 12:42:01.257550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:11:12.846 [2024-11-06 12:42:01.433042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.104 [2024-11-06 12:42:01.556706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.368 [2024-11-06 12:42:01.761331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.368 [2024-11-06 12:42:01.761651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.627 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.627 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:13.627 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.627 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.627 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.627 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.886 BaseBdev1_malloc 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.886 true 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.886 [2024-11-06 12:42:02.323992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:13.886 [2024-11-06 12:42:02.324317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.886 [2024-11-06 12:42:02.324359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:13.886 [2024-11-06 12:42:02.324379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.886 [2024-11-06 12:42:02.327288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.886 [2024-11-06 12:42:02.327367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.886 BaseBdev1 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.886 BaseBdev2_malloc 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.886 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:13.886 12:42:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 true 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 [2024-11-06 12:42:02.384271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:13.887 [2024-11-06 12:42:02.384369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.887 [2024-11-06 12:42:02.384398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:13.887 [2024-11-06 12:42:02.384415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.887 [2024-11-06 12:42:02.387216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.887 [2024-11-06 12:42:02.387294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.887 BaseBdev2 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:13.887 BaseBdev3_malloc 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 true 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 [2024-11-06 12:42:02.454058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.887 [2024-11-06 12:42:02.454151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.887 [2024-11-06 12:42:02.454178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.887 [2024-11-06 12:42:02.454195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.887 [2024-11-06 12:42:02.457121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.887 [2024-11-06 12:42:02.457409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.887 BaseBdev3 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 BaseBdev4_malloc 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 true 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 [2024-11-06 12:42:02.509960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:13.887 [2024-11-06 12:42:02.510042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.887 [2024-11-06 12:42:02.510080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.887 [2024-11-06 12:42:02.510098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.887 [2024-11-06 12:42:02.513035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.887 [2024-11-06 12:42:02.513090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:13.887 BaseBdev4 
00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 [2024-11-06 12:42:02.518060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.887 [2024-11-06 12:42:02.520621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.887 [2024-11-06 12:42:02.520953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.887 [2024-11-06 12:42:02.521070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.887 [2024-11-06 12:42:02.521450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:13.887 [2024-11-06 12:42:02.521479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.887 [2024-11-06 12:42:02.521808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:13.887 [2024-11-06 12:42:02.522042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:13.887 [2024-11-06 12:42:02.522063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:13.887 [2024-11-06 12:42:02.522323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.887 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.148 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.148 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.148 "name": "raid_bdev1", 00:11:14.148 "uuid": "5652be59-b443-4228-b697-def6c76d90ac", 00:11:14.148 "strip_size_kb": 64, 00:11:14.148 "state": "online", 00:11:14.148 "raid_level": "raid0", 00:11:14.148 "superblock": true, 00:11:14.148 "num_base_bdevs": 4, 00:11:14.148 "num_base_bdevs_discovered": 4, 00:11:14.148 
"num_base_bdevs_operational": 4, 00:11:14.148 "base_bdevs_list": [ 00:11:14.148 { 00:11:14.148 "name": "BaseBdev1", 00:11:14.148 "uuid": "cbafe7b6-8eaa-52c1-8d9b-34d9acad4e50", 00:11:14.148 "is_configured": true, 00:11:14.148 "data_offset": 2048, 00:11:14.148 "data_size": 63488 00:11:14.148 }, 00:11:14.148 { 00:11:14.148 "name": "BaseBdev2", 00:11:14.148 "uuid": "487c112c-8814-5911-8caf-da534d0d6acd", 00:11:14.148 "is_configured": true, 00:11:14.148 "data_offset": 2048, 00:11:14.148 "data_size": 63488 00:11:14.148 }, 00:11:14.148 { 00:11:14.148 "name": "BaseBdev3", 00:11:14.148 "uuid": "6a79edf7-5bfe-5a2c-97f9-1853ac2df94b", 00:11:14.148 "is_configured": true, 00:11:14.148 "data_offset": 2048, 00:11:14.148 "data_size": 63488 00:11:14.148 }, 00:11:14.148 { 00:11:14.148 "name": "BaseBdev4", 00:11:14.148 "uuid": "03a388d3-92b7-553c-9b52-8ef6f3a64451", 00:11:14.148 "is_configured": true, 00:11:14.148 "data_offset": 2048, 00:11:14.148 "data_size": 63488 00:11:14.148 } 00:11:14.148 ] 00:11:14.148 }' 00:11:14.148 12:42:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.148 12:42:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.406 12:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.406 12:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.664 [2024-11-06 12:42:03.184031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.598 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.599 "name": "raid_bdev1", 00:11:15.599 "uuid": "5652be59-b443-4228-b697-def6c76d90ac", 00:11:15.599 "strip_size_kb": 64, 00:11:15.599 "state": "online", 00:11:15.599 "raid_level": "raid0", 00:11:15.599 "superblock": true, 00:11:15.599 "num_base_bdevs": 4, 00:11:15.599 "num_base_bdevs_discovered": 4, 00:11:15.599 "num_base_bdevs_operational": 4, 00:11:15.599 "base_bdevs_list": [ 00:11:15.599 { 00:11:15.599 "name": "BaseBdev1", 00:11:15.599 "uuid": "cbafe7b6-8eaa-52c1-8d9b-34d9acad4e50", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev2", 00:11:15.599 "uuid": "487c112c-8814-5911-8caf-da534d0d6acd", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev3", 00:11:15.599 "uuid": "6a79edf7-5bfe-5a2c-97f9-1853ac2df94b", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev4", 00:11:15.599 "uuid": "03a388d3-92b7-553c-9b52-8ef6f3a64451", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 } 00:11:15.599 ] 00:11:15.599 }' 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.599 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:16.190 [2024-11-06 12:42:04.602540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.190 [2024-11-06 12:42:04.602928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.190 [2024-11-06 12:42:04.606440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.190 { 00:11:16.190 "results": [ 00:11:16.190 { 00:11:16.190 "job": "raid_bdev1", 00:11:16.190 "core_mask": "0x1", 00:11:16.190 "workload": "randrw", 00:11:16.190 "percentage": 50, 00:11:16.190 "status": "finished", 00:11:16.190 "queue_depth": 1, 00:11:16.190 "io_size": 131072, 00:11:16.190 "runtime": 1.416687, 00:11:16.190 "iops": 10871.138084841607, 00:11:16.190 "mibps": 1358.8922606052008, 00:11:16.190 "io_failed": 1, 00:11:16.190 "io_timeout": 0, 00:11:16.190 "avg_latency_us": 128.63712150724226, 00:11:16.190 "min_latency_us": 38.63272727272727, 00:11:16.190 "max_latency_us": 1884.16 00:11:16.190 } 00:11:16.190 ], 00:11:16.190 "core_count": 1 00:11:16.190 } 00:11:16.190 [2024-11-06 12:42:04.606725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.190 [2024-11-06 12:42:04.606804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.190 [2024-11-06 12:42:04.606825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71274 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71274 ']' 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71274 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 
00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71274 00:11:16.190 killing process with pid 71274 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71274' 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71274 00:11:16.190 [2024-11-06 12:42:04.649324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.190 12:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71274 00:11:16.449 [2024-11-06 12:42:04.944111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.11DpWqk69R 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:17.384 00:11:17.384 real 0m4.879s 00:11:17.384 user 0m6.027s 00:11:17.384 sys 0m0.632s 00:11:17.384 
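The trace above shows raid_write_error_test stacking each malloc bdev under an error bdev and a passthru bdev, assembling them into a RAID-0 array, then injecting a write failure on the first base bdev. A minimal dry-run sketch of that RPC sequence follows; the RPC command names are taken from the log, while the helper name `build_error_raid` and the dry-run `RPC="echo rpc.py"` indirection are our own illustration (point `RPC` at SPDK's real `rpc.py` to run against a live target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence raid_write_error_test issues per the
# xtrace above. RPC is a placeholder: "echo rpc.py" prints each command
# instead of sending it, so this runs without an SPDK target.
RPC="echo rpc.py"

build_error_raid() {
    local raid_name=$1; shift
    local base_bdevs=("$@")
    local bdev
    for bdev in "${base_bdevs[@]}"; do
        # malloc backing store (32 MiB, 512-byte blocks, as in the log)
        $RPC bdev_malloc_create 32 512 -b "${bdev}_malloc"
        # error-injection wrapper; creates the EE_<name> device
        $RPC bdev_error_create "${bdev}_malloc"
        # passthru on top provides the stable name the raid layer claims
        $RPC bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
    done
    # RAID-0 with 64 KiB strips and an on-disk superblock (-s)
    $RPC bdev_raid_create -z 64 -r raid0 -b "${base_bdevs[*]}" -n "$raid_name" -s
    # make the first base bdev fail writes, as the test does
    $RPC bdev_error_inject_error "EE_${base_bdevs[0]}_malloc" write failure
}

build_error_raid raid_bdev1 BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

With the dry-run placeholder this prints one `rpc.py` line per RPC, matching the order of the `rpc_cmd` calls visible in the xtrace (bdev_raid.sh@815-817, @821, @829).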
************************************ 00:11:17.384 END TEST raid_write_error_test 00:11:17.384 ************************************ 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.384 12:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.643 12:42:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:17.643 12:42:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:17.643 12:42:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:17.643 12:42:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.643 12:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.643 ************************************ 00:11:17.643 START TEST raid_state_function_test 00:11:17.643 ************************************ 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.643 12:42:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:17.643 12:42:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:17.643 Process raid pid: 71418 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71418 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71418' 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71418 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71418 ']' 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.643 12:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.643 [2024-11-06 12:42:06.200826] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:17.643 [2024-11-06 12:42:06.201031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.902 [2024-11-06 12:42:06.391365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.902 [2024-11-06 12:42:06.523015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.160 [2024-11-06 12:42:06.722373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.160 [2024-11-06 12:42:06.722668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.727 [2024-11-06 12:42:07.190444] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.727 [2024-11-06 12:42:07.190547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.727 [2024-11-06 12:42:07.190565] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.727 [2024-11-06 12:42:07.190581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.727 [2024-11-06 12:42:07.190591] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:18.727 [2024-11-06 12:42:07.190605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.727 [2024-11-06 12:42:07.190614] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.727 [2024-11-06 12:42:07.190628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.727 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.728 "name": "Existed_Raid", 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.728 "strip_size_kb": 64, 00:11:18.728 "state": "configuring", 00:11:18.728 "raid_level": "concat", 00:11:18.728 "superblock": false, 00:11:18.728 "num_base_bdevs": 4, 00:11:18.728 "num_base_bdevs_discovered": 0, 00:11:18.728 "num_base_bdevs_operational": 4, 00:11:18.728 "base_bdevs_list": [ 00:11:18.728 { 00:11:18.728 "name": "BaseBdev1", 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 0, 00:11:18.728 "data_size": 0 00:11:18.728 }, 00:11:18.728 { 00:11:18.728 "name": "BaseBdev2", 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 0, 00:11:18.728 "data_size": 0 00:11:18.728 }, 00:11:18.728 { 00:11:18.728 "name": "BaseBdev3", 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 0, 00:11:18.728 "data_size": 0 00:11:18.728 }, 00:11:18.728 { 00:11:18.728 "name": "BaseBdev4", 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 0, 00:11:18.728 "data_size": 0 00:11:18.728 } 00:11:18.728 ] 00:11:18.728 }' 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.728 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 [2024-11-06 12:42:07.694554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.295 [2024-11-06 12:42:07.694637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 [2024-11-06 12:42:07.706534] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.295 [2024-11-06 12:42:07.706620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.295 [2024-11-06 12:42:07.706637] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.295 [2024-11-06 12:42:07.706653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.295 [2024-11-06 12:42:07.706663] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.295 [2024-11-06 12:42:07.706677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.295 [2024-11-06 12:42:07.706686] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.295 [2024-11-06 12:42:07.706700] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 [2024-11-06 12:42:07.756197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.295 BaseBdev1 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 [ 00:11:19.295 { 00:11:19.295 "name": "BaseBdev1", 00:11:19.295 "aliases": [ 00:11:19.295 "5a7ba7de-6836-45fd-84a0-7e15bf3713fc" 00:11:19.295 ], 00:11:19.295 "product_name": "Malloc disk", 00:11:19.295 "block_size": 512, 00:11:19.295 "num_blocks": 65536, 00:11:19.295 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:19.295 "assigned_rate_limits": { 00:11:19.295 "rw_ios_per_sec": 0, 00:11:19.295 "rw_mbytes_per_sec": 0, 00:11:19.295 "r_mbytes_per_sec": 0, 00:11:19.295 "w_mbytes_per_sec": 0 00:11:19.295 }, 00:11:19.295 "claimed": true, 00:11:19.295 "claim_type": "exclusive_write", 00:11:19.295 "zoned": false, 00:11:19.295 "supported_io_types": { 00:11:19.295 "read": true, 00:11:19.295 "write": true, 00:11:19.295 "unmap": true, 00:11:19.295 "flush": true, 00:11:19.295 "reset": true, 00:11:19.295 "nvme_admin": false, 00:11:19.295 "nvme_io": false, 00:11:19.295 "nvme_io_md": false, 00:11:19.295 "write_zeroes": true, 00:11:19.295 "zcopy": true, 00:11:19.295 "get_zone_info": false, 00:11:19.295 "zone_management": false, 00:11:19.295 "zone_append": false, 00:11:19.295 "compare": false, 00:11:19.295 "compare_and_write": false, 00:11:19.295 "abort": true, 00:11:19.295 "seek_hole": false, 00:11:19.295 "seek_data": false, 00:11:19.295 "copy": true, 00:11:19.295 "nvme_iov_md": false 00:11:19.295 }, 00:11:19.295 "memory_domains": [ 00:11:19.295 { 00:11:19.295 "dma_device_id": "system", 00:11:19.295 "dma_device_type": 1 00:11:19.295 }, 00:11:19.295 { 00:11:19.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.295 "dma_device_type": 2 00:11:19.295 } 00:11:19.295 ], 00:11:19.295 "driver_specific": {} 00:11:19.295 } 00:11:19.295 ] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.295 "name": "Existed_Raid", 
00:11:19.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.295 "strip_size_kb": 64, 00:11:19.295 "state": "configuring", 00:11:19.295 "raid_level": "concat", 00:11:19.295 "superblock": false, 00:11:19.295 "num_base_bdevs": 4, 00:11:19.295 "num_base_bdevs_discovered": 1, 00:11:19.295 "num_base_bdevs_operational": 4, 00:11:19.295 "base_bdevs_list": [ 00:11:19.295 { 00:11:19.295 "name": "BaseBdev1", 00:11:19.295 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:19.295 "is_configured": true, 00:11:19.295 "data_offset": 0, 00:11:19.295 "data_size": 65536 00:11:19.295 }, 00:11:19.295 { 00:11:19.295 "name": "BaseBdev2", 00:11:19.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.295 "is_configured": false, 00:11:19.295 "data_offset": 0, 00:11:19.295 "data_size": 0 00:11:19.295 }, 00:11:19.295 { 00:11:19.295 "name": "BaseBdev3", 00:11:19.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.295 "is_configured": false, 00:11:19.295 "data_offset": 0, 00:11:19.295 "data_size": 0 00:11:19.295 }, 00:11:19.295 { 00:11:19.295 "name": "BaseBdev4", 00:11:19.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.295 "is_configured": false, 00:11:19.295 "data_offset": 0, 00:11:19.295 "data_size": 0 00:11:19.295 } 00:11:19.295 ] 00:11:19.295 }' 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.295 12:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.863 [2024-11-06 12:42:08.320500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.863 [2024-11-06 12:42:08.320631] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.863 [2024-11-06 12:42:08.332512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.863 [2024-11-06 12:42:08.335177] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.863 [2024-11-06 12:42:08.335275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.863 [2024-11-06 12:42:08.335294] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.863 [2024-11-06 12:42:08.335312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.863 [2024-11-06 12:42:08.335322] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.863 [2024-11-06 12:42:08.335349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.863 "name": "Existed_Raid", 00:11:19.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.863 "strip_size_kb": 64, 00:11:19.863 "state": "configuring", 00:11:19.863 "raid_level": "concat", 00:11:19.863 "superblock": false, 00:11:19.863 "num_base_bdevs": 4, 00:11:19.863 
"num_base_bdevs_discovered": 1, 00:11:19.863 "num_base_bdevs_operational": 4, 00:11:19.863 "base_bdevs_list": [ 00:11:19.863 { 00:11:19.863 "name": "BaseBdev1", 00:11:19.863 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:19.863 "is_configured": true, 00:11:19.863 "data_offset": 0, 00:11:19.863 "data_size": 65536 00:11:19.863 }, 00:11:19.863 { 00:11:19.863 "name": "BaseBdev2", 00:11:19.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.863 "is_configured": false, 00:11:19.863 "data_offset": 0, 00:11:19.863 "data_size": 0 00:11:19.863 }, 00:11:19.863 { 00:11:19.863 "name": "BaseBdev3", 00:11:19.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.863 "is_configured": false, 00:11:19.863 "data_offset": 0, 00:11:19.863 "data_size": 0 00:11:19.863 }, 00:11:19.863 { 00:11:19.863 "name": "BaseBdev4", 00:11:19.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.863 "is_configured": false, 00:11:19.863 "data_offset": 0, 00:11:19.863 "data_size": 0 00:11:19.863 } 00:11:19.863 ] 00:11:19.863 }' 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.863 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.431 [2024-11-06 12:42:08.912064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.431 BaseBdev2 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:20.431 12:42:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.431 [ 00:11:20.431 { 00:11:20.431 "name": "BaseBdev2", 00:11:20.431 "aliases": [ 00:11:20.431 "17dff91c-94b9-4aa6-a864-00c45ed0169c" 00:11:20.431 ], 00:11:20.431 "product_name": "Malloc disk", 00:11:20.431 "block_size": 512, 00:11:20.431 "num_blocks": 65536, 00:11:20.431 "uuid": "17dff91c-94b9-4aa6-a864-00c45ed0169c", 00:11:20.431 "assigned_rate_limits": { 00:11:20.431 "rw_ios_per_sec": 0, 00:11:20.431 "rw_mbytes_per_sec": 0, 00:11:20.431 "r_mbytes_per_sec": 0, 00:11:20.431 "w_mbytes_per_sec": 0 00:11:20.431 }, 00:11:20.431 "claimed": true, 00:11:20.431 "claim_type": "exclusive_write", 00:11:20.431 "zoned": false, 00:11:20.431 "supported_io_types": { 
00:11:20.431 "read": true, 00:11:20.431 "write": true, 00:11:20.431 "unmap": true, 00:11:20.431 "flush": true, 00:11:20.431 "reset": true, 00:11:20.431 "nvme_admin": false, 00:11:20.431 "nvme_io": false, 00:11:20.431 "nvme_io_md": false, 00:11:20.431 "write_zeroes": true, 00:11:20.431 "zcopy": true, 00:11:20.431 "get_zone_info": false, 00:11:20.431 "zone_management": false, 00:11:20.431 "zone_append": false, 00:11:20.431 "compare": false, 00:11:20.431 "compare_and_write": false, 00:11:20.431 "abort": true, 00:11:20.431 "seek_hole": false, 00:11:20.431 "seek_data": false, 00:11:20.431 "copy": true, 00:11:20.431 "nvme_iov_md": false 00:11:20.431 }, 00:11:20.431 "memory_domains": [ 00:11:20.431 { 00:11:20.431 "dma_device_id": "system", 00:11:20.431 "dma_device_type": 1 00:11:20.431 }, 00:11:20.431 { 00:11:20.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.431 "dma_device_type": 2 00:11:20.431 } 00:11:20.431 ], 00:11:20.431 "driver_specific": {} 00:11:20.431 } 00:11:20.431 ] 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.431 12:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.431 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.431 "name": "Existed_Raid", 00:11:20.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.431 "strip_size_kb": 64, 00:11:20.431 "state": "configuring", 00:11:20.431 "raid_level": "concat", 00:11:20.431 "superblock": false, 00:11:20.431 "num_base_bdevs": 4, 00:11:20.431 "num_base_bdevs_discovered": 2, 00:11:20.431 "num_base_bdevs_operational": 4, 00:11:20.431 "base_bdevs_list": [ 00:11:20.431 { 00:11:20.431 "name": "BaseBdev1", 00:11:20.431 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:20.431 "is_configured": true, 00:11:20.431 "data_offset": 0, 00:11:20.431 "data_size": 65536 00:11:20.431 }, 00:11:20.431 { 00:11:20.431 "name": "BaseBdev2", 00:11:20.431 "uuid": "17dff91c-94b9-4aa6-a864-00c45ed0169c", 00:11:20.431 
"is_configured": true, 00:11:20.431 "data_offset": 0, 00:11:20.431 "data_size": 65536 00:11:20.431 }, 00:11:20.431 { 00:11:20.431 "name": "BaseBdev3", 00:11:20.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.431 "is_configured": false, 00:11:20.431 "data_offset": 0, 00:11:20.431 "data_size": 0 00:11:20.431 }, 00:11:20.431 { 00:11:20.431 "name": "BaseBdev4", 00:11:20.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.431 "is_configured": false, 00:11:20.431 "data_offset": 0, 00:11:20.431 "data_size": 0 00:11:20.431 } 00:11:20.431 ] 00:11:20.431 }' 00:11:20.431 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.431 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 [2024-11-06 12:42:09.496605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.998 BaseBdev3 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 [ 00:11:20.998 { 00:11:20.998 "name": "BaseBdev3", 00:11:20.998 "aliases": [ 00:11:20.998 "89d5da60-8e2a-46f2-b56a-6ee650077e01" 00:11:20.998 ], 00:11:20.998 "product_name": "Malloc disk", 00:11:20.998 "block_size": 512, 00:11:20.998 "num_blocks": 65536, 00:11:20.998 "uuid": "89d5da60-8e2a-46f2-b56a-6ee650077e01", 00:11:20.998 "assigned_rate_limits": { 00:11:20.998 "rw_ios_per_sec": 0, 00:11:20.998 "rw_mbytes_per_sec": 0, 00:11:20.998 "r_mbytes_per_sec": 0, 00:11:20.998 "w_mbytes_per_sec": 0 00:11:20.998 }, 00:11:20.998 "claimed": true, 00:11:20.998 "claim_type": "exclusive_write", 00:11:20.998 "zoned": false, 00:11:20.998 "supported_io_types": { 00:11:20.998 "read": true, 00:11:20.998 "write": true, 00:11:20.998 "unmap": true, 00:11:20.998 "flush": true, 00:11:20.998 "reset": true, 00:11:20.998 "nvme_admin": false, 00:11:20.998 "nvme_io": false, 00:11:20.998 "nvme_io_md": false, 00:11:20.998 "write_zeroes": true, 00:11:20.998 "zcopy": true, 00:11:20.998 "get_zone_info": false, 00:11:20.998 "zone_management": false, 00:11:20.998 "zone_append": false, 00:11:20.998 "compare": false, 00:11:20.998 "compare_and_write": false, 
00:11:20.998 "abort": true, 00:11:20.998 "seek_hole": false, 00:11:20.998 "seek_data": false, 00:11:20.998 "copy": true, 00:11:20.998 "nvme_iov_md": false 00:11:20.998 }, 00:11:20.998 "memory_domains": [ 00:11:20.998 { 00:11:20.998 "dma_device_id": "system", 00:11:20.998 "dma_device_type": 1 00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.998 "dma_device_type": 2 00:11:20.998 } 00:11:20.998 ], 00:11:20.998 "driver_specific": {} 00:11:20.998 } 00:11:20.998 ] 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.998 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.998 "name": "Existed_Raid", 00:11:20.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.998 "strip_size_kb": 64, 00:11:20.998 "state": "configuring", 00:11:20.998 "raid_level": "concat", 00:11:20.998 "superblock": false, 00:11:20.998 "num_base_bdevs": 4, 00:11:20.998 "num_base_bdevs_discovered": 3, 00:11:20.998 "num_base_bdevs_operational": 4, 00:11:20.998 "base_bdevs_list": [ 00:11:20.998 { 00:11:20.998 "name": "BaseBdev1", 00:11:20.998 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:20.998 "is_configured": true, 00:11:20.998 "data_offset": 0, 00:11:20.998 "data_size": 65536 00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "name": "BaseBdev2", 00:11:20.998 "uuid": "17dff91c-94b9-4aa6-a864-00c45ed0169c", 00:11:20.998 "is_configured": true, 00:11:20.998 "data_offset": 0, 00:11:20.998 "data_size": 65536 00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "name": "BaseBdev3", 00:11:20.998 "uuid": "89d5da60-8e2a-46f2-b56a-6ee650077e01", 00:11:20.998 "is_configured": true, 00:11:20.998 "data_offset": 0, 00:11:20.998 "data_size": 65536 00:11:20.998 }, 00:11:20.998 { 00:11:20.998 "name": "BaseBdev4", 00:11:20.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.998 "is_configured": false, 
00:11:20.998 "data_offset": 0, 00:11:20.998 "data_size": 0 00:11:20.999 } 00:11:20.999 ] 00:11:20.999 }' 00:11:20.999 12:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.999 12:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.575 [2024-11-06 12:42:10.079322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.575 [2024-11-06 12:42:10.079627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.575 [2024-11-06 12:42:10.079653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:21.575 [2024-11-06 12:42:10.080054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:21.575 [2024-11-06 12:42:10.080334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.575 [2024-11-06 12:42:10.080357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:21.575 [2024-11-06 12:42:10.080728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.575 BaseBdev4 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.575 [ 00:11:21.575 { 00:11:21.575 "name": "BaseBdev4", 00:11:21.575 "aliases": [ 00:11:21.575 "7059f730-c186-48df-af93-d9e2076c2faf" 00:11:21.575 ], 00:11:21.575 "product_name": "Malloc disk", 00:11:21.575 "block_size": 512, 00:11:21.575 "num_blocks": 65536, 00:11:21.575 "uuid": "7059f730-c186-48df-af93-d9e2076c2faf", 00:11:21.575 "assigned_rate_limits": { 00:11:21.575 "rw_ios_per_sec": 0, 00:11:21.575 "rw_mbytes_per_sec": 0, 00:11:21.575 "r_mbytes_per_sec": 0, 00:11:21.575 "w_mbytes_per_sec": 0 00:11:21.575 }, 00:11:21.575 "claimed": true, 00:11:21.575 "claim_type": "exclusive_write", 00:11:21.575 "zoned": false, 00:11:21.575 "supported_io_types": { 00:11:21.575 "read": true, 00:11:21.575 "write": true, 00:11:21.575 "unmap": true, 00:11:21.575 "flush": true, 00:11:21.575 "reset": true, 00:11:21.575 
"nvme_admin": false, 00:11:21.575 "nvme_io": false, 00:11:21.575 "nvme_io_md": false, 00:11:21.575 "write_zeroes": true, 00:11:21.575 "zcopy": true, 00:11:21.575 "get_zone_info": false, 00:11:21.575 "zone_management": false, 00:11:21.575 "zone_append": false, 00:11:21.575 "compare": false, 00:11:21.575 "compare_and_write": false, 00:11:21.575 "abort": true, 00:11:21.575 "seek_hole": false, 00:11:21.575 "seek_data": false, 00:11:21.575 "copy": true, 00:11:21.575 "nvme_iov_md": false 00:11:21.575 }, 00:11:21.575 "memory_domains": [ 00:11:21.575 { 00:11:21.575 "dma_device_id": "system", 00:11:21.575 "dma_device_type": 1 00:11:21.575 }, 00:11:21.575 { 00:11:21.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.575 "dma_device_type": 2 00:11:21.575 } 00:11:21.575 ], 00:11:21.575 "driver_specific": {} 00:11:21.575 } 00:11:21.575 ] 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.575 
12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.575 "name": "Existed_Raid", 00:11:21.575 "uuid": "62f2e67b-b2fe-49f9-805b-5ff5f8fea120", 00:11:21.575 "strip_size_kb": 64, 00:11:21.575 "state": "online", 00:11:21.575 "raid_level": "concat", 00:11:21.575 "superblock": false, 00:11:21.575 "num_base_bdevs": 4, 00:11:21.575 "num_base_bdevs_discovered": 4, 00:11:21.575 "num_base_bdevs_operational": 4, 00:11:21.575 "base_bdevs_list": [ 00:11:21.575 { 00:11:21.575 "name": "BaseBdev1", 00:11:21.575 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:21.575 "is_configured": true, 00:11:21.575 "data_offset": 0, 00:11:21.575 "data_size": 65536 00:11:21.575 }, 00:11:21.575 { 00:11:21.575 "name": "BaseBdev2", 00:11:21.575 "uuid": "17dff91c-94b9-4aa6-a864-00c45ed0169c", 00:11:21.575 "is_configured": true, 00:11:21.575 "data_offset": 0, 00:11:21.575 "data_size": 65536 00:11:21.575 }, 00:11:21.575 { 00:11:21.575 "name": "BaseBdev3", 
00:11:21.575 "uuid": "89d5da60-8e2a-46f2-b56a-6ee650077e01", 00:11:21.575 "is_configured": true, 00:11:21.575 "data_offset": 0, 00:11:21.575 "data_size": 65536 00:11:21.575 }, 00:11:21.575 { 00:11:21.575 "name": "BaseBdev4", 00:11:21.575 "uuid": "7059f730-c186-48df-af93-d9e2076c2faf", 00:11:21.575 "is_configured": true, 00:11:21.575 "data_offset": 0, 00:11:21.575 "data_size": 65536 00:11:21.575 } 00:11:21.575 ] 00:11:21.575 }' 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.575 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.143 [2024-11-06 12:42:10.644007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.143 
12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.143 "name": "Existed_Raid", 00:11:22.143 "aliases": [ 00:11:22.143 "62f2e67b-b2fe-49f9-805b-5ff5f8fea120" 00:11:22.143 ], 00:11:22.143 "product_name": "Raid Volume", 00:11:22.143 "block_size": 512, 00:11:22.143 "num_blocks": 262144, 00:11:22.143 "uuid": "62f2e67b-b2fe-49f9-805b-5ff5f8fea120", 00:11:22.143 "assigned_rate_limits": { 00:11:22.143 "rw_ios_per_sec": 0, 00:11:22.143 "rw_mbytes_per_sec": 0, 00:11:22.143 "r_mbytes_per_sec": 0, 00:11:22.143 "w_mbytes_per_sec": 0 00:11:22.143 }, 00:11:22.143 "claimed": false, 00:11:22.143 "zoned": false, 00:11:22.143 "supported_io_types": { 00:11:22.143 "read": true, 00:11:22.143 "write": true, 00:11:22.143 "unmap": true, 00:11:22.143 "flush": true, 00:11:22.143 "reset": true, 00:11:22.143 "nvme_admin": false, 00:11:22.143 "nvme_io": false, 00:11:22.143 "nvme_io_md": false, 00:11:22.143 "write_zeroes": true, 00:11:22.143 "zcopy": false, 00:11:22.143 "get_zone_info": false, 00:11:22.143 "zone_management": false, 00:11:22.143 "zone_append": false, 00:11:22.143 "compare": false, 00:11:22.143 "compare_and_write": false, 00:11:22.143 "abort": false, 00:11:22.143 "seek_hole": false, 00:11:22.143 "seek_data": false, 00:11:22.143 "copy": false, 00:11:22.143 "nvme_iov_md": false 00:11:22.143 }, 00:11:22.143 "memory_domains": [ 00:11:22.143 { 00:11:22.143 "dma_device_id": "system", 00:11:22.143 "dma_device_type": 1 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.143 "dma_device_type": 2 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": "system", 00:11:22.143 "dma_device_type": 1 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.143 "dma_device_type": 2 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": "system", 00:11:22.143 "dma_device_type": 1 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:22.143 "dma_device_type": 2 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": "system", 00:11:22.143 "dma_device_type": 1 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.143 "dma_device_type": 2 00:11:22.143 } 00:11:22.143 ], 00:11:22.143 "driver_specific": { 00:11:22.143 "raid": { 00:11:22.143 "uuid": "62f2e67b-b2fe-49f9-805b-5ff5f8fea120", 00:11:22.143 "strip_size_kb": 64, 00:11:22.143 "state": "online", 00:11:22.143 "raid_level": "concat", 00:11:22.143 "superblock": false, 00:11:22.143 "num_base_bdevs": 4, 00:11:22.143 "num_base_bdevs_discovered": 4, 00:11:22.143 "num_base_bdevs_operational": 4, 00:11:22.143 "base_bdevs_list": [ 00:11:22.143 { 00:11:22.143 "name": "BaseBdev1", 00:11:22.143 "uuid": "5a7ba7de-6836-45fd-84a0-7e15bf3713fc", 00:11:22.143 "is_configured": true, 00:11:22.143 "data_offset": 0, 00:11:22.143 "data_size": 65536 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "name": "BaseBdev2", 00:11:22.143 "uuid": "17dff91c-94b9-4aa6-a864-00c45ed0169c", 00:11:22.143 "is_configured": true, 00:11:22.143 "data_offset": 0, 00:11:22.143 "data_size": 65536 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "name": "BaseBdev3", 00:11:22.143 "uuid": "89d5da60-8e2a-46f2-b56a-6ee650077e01", 00:11:22.143 "is_configured": true, 00:11:22.143 "data_offset": 0, 00:11:22.143 "data_size": 65536 00:11:22.143 }, 00:11:22.143 { 00:11:22.143 "name": "BaseBdev4", 00:11:22.143 "uuid": "7059f730-c186-48df-af93-d9e2076c2faf", 00:11:22.143 "is_configured": true, 00:11:22.143 "data_offset": 0, 00:11:22.143 "data_size": 65536 00:11:22.143 } 00:11:22.143 ] 00:11:22.143 } 00:11:22.143 } 00:11:22.143 }' 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:22.143 BaseBdev2 
00:11:22.143 BaseBdev3 00:11:22.143 BaseBdev4' 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.143 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.401 12:42:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.401 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.402 12:42:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.402 12:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.402 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.402 12:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.402 [2024-11-06 12:42:10.999754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.402 [2024-11-06 12:42:10.999958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.402 [2024-11-06 12:42:11.000050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.660 "name": "Existed_Raid", 00:11:22.660 "uuid": "62f2e67b-b2fe-49f9-805b-5ff5f8fea120", 00:11:22.660 "strip_size_kb": 64, 00:11:22.660 "state": "offline", 00:11:22.660 "raid_level": "concat", 00:11:22.660 "superblock": false, 00:11:22.660 "num_base_bdevs": 4, 00:11:22.660 "num_base_bdevs_discovered": 3, 00:11:22.660 "num_base_bdevs_operational": 3, 00:11:22.660 "base_bdevs_list": [ 00:11:22.660 { 00:11:22.660 "name": null, 00:11:22.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.660 "is_configured": false, 00:11:22.660 "data_offset": 0, 00:11:22.660 "data_size": 65536 00:11:22.660 }, 00:11:22.660 { 00:11:22.660 "name": "BaseBdev2", 00:11:22.660 "uuid": "17dff91c-94b9-4aa6-a864-00c45ed0169c", 00:11:22.660 "is_configured": 
true, 00:11:22.660 "data_offset": 0, 00:11:22.660 "data_size": 65536 00:11:22.660 }, 00:11:22.660 { 00:11:22.660 "name": "BaseBdev3", 00:11:22.660 "uuid": "89d5da60-8e2a-46f2-b56a-6ee650077e01", 00:11:22.660 "is_configured": true, 00:11:22.660 "data_offset": 0, 00:11:22.660 "data_size": 65536 00:11:22.660 }, 00:11:22.660 { 00:11:22.660 "name": "BaseBdev4", 00:11:22.660 "uuid": "7059f730-c186-48df-af93-d9e2076c2faf", 00:11:22.660 "is_configured": true, 00:11:22.660 "data_offset": 0, 00:11:22.660 "data_size": 65536 00:11:22.660 } 00:11:22.660 ] 00:11:22.660 }' 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.660 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.227 [2024-11-06 12:42:11.648763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.227 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.227 [2024-11-06 12:42:11.803005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.487 12:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.487 12:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.487 [2024-11-06 12:42:11.949079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:23.487 [2024-11-06 12:42:11.949163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.487 BaseBdev2 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:23.487 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.488 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.488 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.488 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:23.488 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.488 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.488 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 [ 00:11:23.771 { 00:11:23.771 "name": "BaseBdev2", 00:11:23.771 "aliases": [ 00:11:23.771 "aa8be2d7-0996-469e-bbfd-38e009c80ce5" 00:11:23.771 ], 00:11:23.771 "product_name": "Malloc disk", 00:11:23.771 "block_size": 512, 00:11:23.771 "num_blocks": 65536, 00:11:23.771 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:23.771 "assigned_rate_limits": { 00:11:23.771 "rw_ios_per_sec": 0, 00:11:23.771 "rw_mbytes_per_sec": 0, 00:11:23.771 "r_mbytes_per_sec": 0, 00:11:23.771 "w_mbytes_per_sec": 0 00:11:23.771 }, 00:11:23.771 "claimed": false, 00:11:23.771 "zoned": false, 00:11:23.771 "supported_io_types": { 00:11:23.771 "read": true, 00:11:23.771 "write": true, 00:11:23.771 "unmap": true, 00:11:23.771 "flush": true, 00:11:23.771 "reset": true, 00:11:23.771 "nvme_admin": false, 00:11:23.771 "nvme_io": false, 00:11:23.771 "nvme_io_md": false, 00:11:23.771 "write_zeroes": true, 00:11:23.771 "zcopy": true, 00:11:23.771 "get_zone_info": false, 00:11:23.771 "zone_management": false, 00:11:23.771 "zone_append": false, 00:11:23.771 "compare": false, 00:11:23.771 "compare_and_write": false, 00:11:23.771 "abort": true, 00:11:23.771 "seek_hole": false, 00:11:23.771 
"seek_data": false, 00:11:23.771 "copy": true, 00:11:23.771 "nvme_iov_md": false 00:11:23.771 }, 00:11:23.771 "memory_domains": [ 00:11:23.771 { 00:11:23.771 "dma_device_id": "system", 00:11:23.771 "dma_device_type": 1 00:11:23.771 }, 00:11:23.771 { 00:11:23.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.771 "dma_device_type": 2 00:11:23.771 } 00:11:23.771 ], 00:11:23.771 "driver_specific": {} 00:11:23.771 } 00:11:23.771 ] 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 BaseBdev3 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.771 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.771 [ 00:11:23.771 { 00:11:23.771 "name": "BaseBdev3", 00:11:23.771 "aliases": [ 00:11:23.771 "cc693369-d7aa-41ac-a000-99fa7727a41d" 00:11:23.771 ], 00:11:23.771 "product_name": "Malloc disk", 00:11:23.771 "block_size": 512, 00:11:23.771 "num_blocks": 65536, 00:11:23.771 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:23.771 "assigned_rate_limits": { 00:11:23.772 "rw_ios_per_sec": 0, 00:11:23.772 "rw_mbytes_per_sec": 0, 00:11:23.772 "r_mbytes_per_sec": 0, 00:11:23.772 "w_mbytes_per_sec": 0 00:11:23.772 }, 00:11:23.772 "claimed": false, 00:11:23.772 "zoned": false, 00:11:23.772 "supported_io_types": { 00:11:23.772 "read": true, 00:11:23.772 "write": true, 00:11:23.772 "unmap": true, 00:11:23.772 "flush": true, 00:11:23.772 "reset": true, 00:11:23.772 "nvme_admin": false, 00:11:23.772 "nvme_io": false, 00:11:23.772 "nvme_io_md": false, 00:11:23.772 "write_zeroes": true, 00:11:23.772 "zcopy": true, 00:11:23.772 "get_zone_info": false, 00:11:23.772 "zone_management": false, 00:11:23.772 "zone_append": false, 00:11:23.772 "compare": false, 00:11:23.772 "compare_and_write": false, 00:11:23.772 "abort": true, 00:11:23.772 "seek_hole": false, 00:11:23.772 "seek_data": false, 
00:11:23.772 "copy": true, 00:11:23.772 "nvme_iov_md": false 00:11:23.772 }, 00:11:23.772 "memory_domains": [ 00:11:23.772 { 00:11:23.772 "dma_device_id": "system", 00:11:23.772 "dma_device_type": 1 00:11:23.772 }, 00:11:23.772 { 00:11:23.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.772 "dma_device_type": 2 00:11:23.772 } 00:11:23.772 ], 00:11:23.772 "driver_specific": {} 00:11:23.772 } 00:11:23.772 ] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.772 BaseBdev4 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:23.772 
12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.772 [ 00:11:23.772 { 00:11:23.772 "name": "BaseBdev4", 00:11:23.772 "aliases": [ 00:11:23.772 "d35eed3e-c04e-4377-9848-2f4b2e671810" 00:11:23.772 ], 00:11:23.772 "product_name": "Malloc disk", 00:11:23.772 "block_size": 512, 00:11:23.772 "num_blocks": 65536, 00:11:23.772 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:23.772 "assigned_rate_limits": { 00:11:23.772 "rw_ios_per_sec": 0, 00:11:23.772 "rw_mbytes_per_sec": 0, 00:11:23.772 "r_mbytes_per_sec": 0, 00:11:23.772 "w_mbytes_per_sec": 0 00:11:23.772 }, 00:11:23.772 "claimed": false, 00:11:23.772 "zoned": false, 00:11:23.772 "supported_io_types": { 00:11:23.772 "read": true, 00:11:23.772 "write": true, 00:11:23.772 "unmap": true, 00:11:23.772 "flush": true, 00:11:23.772 "reset": true, 00:11:23.772 "nvme_admin": false, 00:11:23.772 "nvme_io": false, 00:11:23.772 "nvme_io_md": false, 00:11:23.772 "write_zeroes": true, 00:11:23.772 "zcopy": true, 00:11:23.772 "get_zone_info": false, 00:11:23.772 "zone_management": false, 00:11:23.772 "zone_append": false, 00:11:23.772 "compare": false, 00:11:23.772 "compare_and_write": false, 00:11:23.772 "abort": true, 00:11:23.772 "seek_hole": false, 00:11:23.772 "seek_data": false, 00:11:23.772 
"copy": true, 00:11:23.772 "nvme_iov_md": false 00:11:23.772 }, 00:11:23.772 "memory_domains": [ 00:11:23.772 { 00:11:23.772 "dma_device_id": "system", 00:11:23.772 "dma_device_type": 1 00:11:23.772 }, 00:11:23.772 { 00:11:23.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.772 "dma_device_type": 2 00:11:23.772 } 00:11:23.772 ], 00:11:23.772 "driver_specific": {} 00:11:23.772 } 00:11:23.772 ] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.772 [2024-11-06 12:42:12.325924] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.772 [2024-11-06 12:42:12.326001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.772 [2024-11-06 12:42:12.326055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.772 [2024-11-06 12:42:12.328720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.772 [2024-11-06 12:42:12.328810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.772 12:42:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.772 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.773 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.773 "name": "Existed_Raid", 00:11:23.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.773 "strip_size_kb": 64, 00:11:23.773 "state": "configuring", 00:11:23.773 
"raid_level": "concat", 00:11:23.773 "superblock": false, 00:11:23.773 "num_base_bdevs": 4, 00:11:23.773 "num_base_bdevs_discovered": 3, 00:11:23.773 "num_base_bdevs_operational": 4, 00:11:23.773 "base_bdevs_list": [ 00:11:23.773 { 00:11:23.773 "name": "BaseBdev1", 00:11:23.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.773 "is_configured": false, 00:11:23.773 "data_offset": 0, 00:11:23.773 "data_size": 0 00:11:23.773 }, 00:11:23.773 { 00:11:23.773 "name": "BaseBdev2", 00:11:23.773 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:23.773 "is_configured": true, 00:11:23.773 "data_offset": 0, 00:11:23.773 "data_size": 65536 00:11:23.773 }, 00:11:23.773 { 00:11:23.773 "name": "BaseBdev3", 00:11:23.773 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:23.773 "is_configured": true, 00:11:23.773 "data_offset": 0, 00:11:23.773 "data_size": 65536 00:11:23.773 }, 00:11:23.773 { 00:11:23.773 "name": "BaseBdev4", 00:11:23.773 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:23.773 "is_configured": true, 00:11:23.773 "data_offset": 0, 00:11:23.773 "data_size": 65536 00:11:23.773 } 00:11:23.773 ] 00:11:23.773 }' 00:11:23.773 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.773 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 [2024-11-06 12:42:12.858082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.339 "name": "Existed_Raid", 00:11:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.339 "strip_size_kb": 64, 00:11:24.339 "state": "configuring", 00:11:24.339 "raid_level": "concat", 00:11:24.339 "superblock": false, 
00:11:24.339 "num_base_bdevs": 4, 00:11:24.339 "num_base_bdevs_discovered": 2, 00:11:24.339 "num_base_bdevs_operational": 4, 00:11:24.339 "base_bdevs_list": [ 00:11:24.339 { 00:11:24.339 "name": "BaseBdev1", 00:11:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.339 "is_configured": false, 00:11:24.339 "data_offset": 0, 00:11:24.339 "data_size": 0 00:11:24.339 }, 00:11:24.339 { 00:11:24.339 "name": null, 00:11:24.339 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:24.339 "is_configured": false, 00:11:24.339 "data_offset": 0, 00:11:24.339 "data_size": 65536 00:11:24.339 }, 00:11:24.339 { 00:11:24.339 "name": "BaseBdev3", 00:11:24.339 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:24.339 "is_configured": true, 00:11:24.339 "data_offset": 0, 00:11:24.339 "data_size": 65536 00:11:24.339 }, 00:11:24.339 { 00:11:24.339 "name": "BaseBdev4", 00:11:24.339 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:24.339 "is_configured": true, 00:11:24.339 "data_offset": 0, 00:11:24.339 "data_size": 65536 00:11:24.339 } 00:11:24.339 ] 00:11:24.339 }' 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.339 12:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:24.904 12:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.904 [2024-11-06 12:42:13.505307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.904 BaseBdev1 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.904 [ 00:11:24.904 { 00:11:24.904 "name": "BaseBdev1", 00:11:24.904 "aliases": [ 00:11:24.904 "9f9904c2-1f6b-4822-bc61-3d85b91ce158" 00:11:24.904 ], 00:11:24.904 "product_name": "Malloc disk", 00:11:24.904 "block_size": 512, 00:11:24.904 "num_blocks": 65536, 00:11:24.904 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:24.904 "assigned_rate_limits": { 00:11:24.904 "rw_ios_per_sec": 0, 00:11:24.904 "rw_mbytes_per_sec": 0, 00:11:24.904 "r_mbytes_per_sec": 0, 00:11:24.904 "w_mbytes_per_sec": 0 00:11:24.904 }, 00:11:24.904 "claimed": true, 00:11:24.904 "claim_type": "exclusive_write", 00:11:24.904 "zoned": false, 00:11:24.904 "supported_io_types": { 00:11:24.904 "read": true, 00:11:24.904 "write": true, 00:11:24.904 "unmap": true, 00:11:24.904 "flush": true, 00:11:24.904 "reset": true, 00:11:24.904 "nvme_admin": false, 00:11:24.904 "nvme_io": false, 00:11:24.904 "nvme_io_md": false, 00:11:24.904 "write_zeroes": true, 00:11:24.904 "zcopy": true, 00:11:24.904 "get_zone_info": false, 00:11:24.904 "zone_management": false, 00:11:24.904 "zone_append": false, 00:11:24.904 "compare": false, 00:11:24.904 "compare_and_write": false, 00:11:24.904 "abort": true, 00:11:24.904 "seek_hole": false, 00:11:24.904 "seek_data": false, 00:11:24.904 "copy": true, 00:11:24.904 "nvme_iov_md": false 00:11:24.904 }, 00:11:24.904 "memory_domains": [ 00:11:24.904 { 00:11:24.904 "dma_device_id": "system", 00:11:24.904 "dma_device_type": 1 00:11:24.904 }, 00:11:24.904 { 00:11:24.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.904 "dma_device_type": 2 00:11:24.904 } 00:11:24.904 ], 00:11:24.904 "driver_specific": {} 00:11:24.904 } 00:11:24.904 ] 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.904 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.162 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.162 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.162 "name": "Existed_Raid", 00:11:25.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.162 "strip_size_kb": 64, 00:11:25.162 "state": "configuring", 00:11:25.162 "raid_level": "concat", 00:11:25.162 "superblock": false, 
00:11:25.162 "num_base_bdevs": 4, 00:11:25.162 "num_base_bdevs_discovered": 3, 00:11:25.162 "num_base_bdevs_operational": 4, 00:11:25.162 "base_bdevs_list": [ 00:11:25.162 { 00:11:25.162 "name": "BaseBdev1", 00:11:25.162 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:25.162 "is_configured": true, 00:11:25.162 "data_offset": 0, 00:11:25.162 "data_size": 65536 00:11:25.162 }, 00:11:25.162 { 00:11:25.162 "name": null, 00:11:25.162 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:25.162 "is_configured": false, 00:11:25.162 "data_offset": 0, 00:11:25.162 "data_size": 65536 00:11:25.162 }, 00:11:25.162 { 00:11:25.162 "name": "BaseBdev3", 00:11:25.162 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:25.162 "is_configured": true, 00:11:25.162 "data_offset": 0, 00:11:25.162 "data_size": 65536 00:11:25.162 }, 00:11:25.162 { 00:11:25.162 "name": "BaseBdev4", 00:11:25.162 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:25.162 "is_configured": true, 00:11:25.162 "data_offset": 0, 00:11:25.162 "data_size": 65536 00:11:25.162 } 00:11:25.162 ] 00:11:25.162 }' 00:11:25.162 12:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.162 12:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.421 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.421 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.421 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.421 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:25.679 12:42:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.679 [2024-11-06 12:42:14.133635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.679 "name": "Existed_Raid", 00:11:25.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.679 "strip_size_kb": 64, 00:11:25.679 "state": "configuring", 00:11:25.679 "raid_level": "concat", 00:11:25.679 "superblock": false, 00:11:25.679 "num_base_bdevs": 4, 00:11:25.679 "num_base_bdevs_discovered": 2, 00:11:25.679 "num_base_bdevs_operational": 4, 00:11:25.679 "base_bdevs_list": [ 00:11:25.679 { 00:11:25.679 "name": "BaseBdev1", 00:11:25.679 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:25.679 "is_configured": true, 00:11:25.679 "data_offset": 0, 00:11:25.679 "data_size": 65536 00:11:25.679 }, 00:11:25.679 { 00:11:25.679 "name": null, 00:11:25.679 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:25.679 "is_configured": false, 00:11:25.679 "data_offset": 0, 00:11:25.679 "data_size": 65536 00:11:25.679 }, 00:11:25.679 { 00:11:25.679 "name": null, 00:11:25.679 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:25.679 "is_configured": false, 00:11:25.679 "data_offset": 0, 00:11:25.679 "data_size": 65536 00:11:25.679 }, 00:11:25.679 { 00:11:25.679 "name": "BaseBdev4", 00:11:25.679 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:25.679 "is_configured": true, 00:11:25.679 "data_offset": 0, 00:11:25.679 "data_size": 65536 00:11:25.679 } 00:11:25.679 ] 00:11:25.679 }' 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.679 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.248 [2024-11-06 12:42:14.685761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.248 "name": "Existed_Raid", 00:11:26.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.248 "strip_size_kb": 64, 00:11:26.248 "state": "configuring", 00:11:26.248 "raid_level": "concat", 00:11:26.248 "superblock": false, 00:11:26.248 "num_base_bdevs": 4, 00:11:26.248 "num_base_bdevs_discovered": 3, 00:11:26.248 "num_base_bdevs_operational": 4, 00:11:26.248 "base_bdevs_list": [ 00:11:26.248 { 00:11:26.248 "name": "BaseBdev1", 00:11:26.248 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:26.248 "is_configured": true, 00:11:26.248 "data_offset": 0, 00:11:26.248 "data_size": 65536 00:11:26.248 }, 00:11:26.248 { 00:11:26.248 "name": null, 00:11:26.248 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:26.248 "is_configured": false, 00:11:26.248 "data_offset": 0, 00:11:26.248 "data_size": 65536 00:11:26.248 }, 00:11:26.248 { 00:11:26.248 "name": "BaseBdev3", 00:11:26.248 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:26.248 
"is_configured": true, 00:11:26.248 "data_offset": 0, 00:11:26.248 "data_size": 65536 00:11:26.248 }, 00:11:26.248 { 00:11:26.248 "name": "BaseBdev4", 00:11:26.248 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:26.248 "is_configured": true, 00:11:26.248 "data_offset": 0, 00:11:26.248 "data_size": 65536 00:11:26.248 } 00:11:26.248 ] 00:11:26.248 }' 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.248 12:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 [2024-11-06 12:42:15.258234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.815 "name": "Existed_Raid", 00:11:26.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.815 "strip_size_kb": 64, 00:11:26.815 "state": "configuring", 00:11:26.815 "raid_level": "concat", 00:11:26.815 "superblock": false, 00:11:26.815 "num_base_bdevs": 4, 00:11:26.815 "num_base_bdevs_discovered": 2, 00:11:26.815 "num_base_bdevs_operational": 4, 
00:11:26.815 "base_bdevs_list": [ 00:11:26.815 { 00:11:26.815 "name": null, 00:11:26.815 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:26.815 "is_configured": false, 00:11:26.815 "data_offset": 0, 00:11:26.815 "data_size": 65536 00:11:26.815 }, 00:11:26.815 { 00:11:26.815 "name": null, 00:11:26.815 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:26.815 "is_configured": false, 00:11:26.815 "data_offset": 0, 00:11:26.815 "data_size": 65536 00:11:26.815 }, 00:11:26.815 { 00:11:26.815 "name": "BaseBdev3", 00:11:26.815 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:26.815 "is_configured": true, 00:11:26.815 "data_offset": 0, 00:11:26.815 "data_size": 65536 00:11:26.815 }, 00:11:26.815 { 00:11:26.815 "name": "BaseBdev4", 00:11:26.815 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:26.815 "is_configured": true, 00:11:26.815 "data_offset": 0, 00:11:26.815 "data_size": 65536 00:11:26.815 } 00:11:26.815 ] 00:11:26.815 }' 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.815 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:27.382 12:42:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.382 [2024-11-06 12:42:15.922043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.382 12:42:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.382 "name": "Existed_Raid", 00:11:27.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.382 "strip_size_kb": 64, 00:11:27.382 "state": "configuring", 00:11:27.382 "raid_level": "concat", 00:11:27.382 "superblock": false, 00:11:27.382 "num_base_bdevs": 4, 00:11:27.382 "num_base_bdevs_discovered": 3, 00:11:27.382 "num_base_bdevs_operational": 4, 00:11:27.382 "base_bdevs_list": [ 00:11:27.382 { 00:11:27.382 "name": null, 00:11:27.382 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:27.382 "is_configured": false, 00:11:27.382 "data_offset": 0, 00:11:27.382 "data_size": 65536 00:11:27.382 }, 00:11:27.382 { 00:11:27.382 "name": "BaseBdev2", 00:11:27.382 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:27.382 "is_configured": true, 00:11:27.382 "data_offset": 0, 00:11:27.382 "data_size": 65536 00:11:27.382 }, 00:11:27.382 { 00:11:27.382 "name": "BaseBdev3", 00:11:27.382 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:27.382 "is_configured": true, 00:11:27.382 "data_offset": 0, 00:11:27.382 "data_size": 65536 00:11:27.382 }, 00:11:27.382 { 00:11:27.382 "name": "BaseBdev4", 00:11:27.382 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:27.382 "is_configured": true, 00:11:27.382 "data_offset": 0, 00:11:27.382 "data_size": 65536 00:11:27.382 } 00:11:27.382 ] 00:11:27.382 }' 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.382 12:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f9904c2-1f6b-4822-bc61-3d85b91ce158 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.948 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.207 [2024-11-06 12:42:16.603758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:28.207 [2024-11-06 12:42:16.603834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.207 [2024-11-06 12:42:16.603847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:28.207 [2024-11-06 12:42:16.604248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:28.207 [2024-11-06 12:42:16.604442] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.207 [2024-11-06 12:42:16.604464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:28.207 [2024-11-06 12:42:16.604812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.207 NewBaseBdev 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.207 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.207 [ 00:11:28.207 { 
00:11:28.207 "name": "NewBaseBdev", 00:11:28.207 "aliases": [ 00:11:28.207 "9f9904c2-1f6b-4822-bc61-3d85b91ce158" 00:11:28.207 ], 00:11:28.207 "product_name": "Malloc disk", 00:11:28.207 "block_size": 512, 00:11:28.207 "num_blocks": 65536, 00:11:28.207 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:28.207 "assigned_rate_limits": { 00:11:28.207 "rw_ios_per_sec": 0, 00:11:28.207 "rw_mbytes_per_sec": 0, 00:11:28.207 "r_mbytes_per_sec": 0, 00:11:28.207 "w_mbytes_per_sec": 0 00:11:28.207 }, 00:11:28.207 "claimed": true, 00:11:28.207 "claim_type": "exclusive_write", 00:11:28.207 "zoned": false, 00:11:28.207 "supported_io_types": { 00:11:28.207 "read": true, 00:11:28.207 "write": true, 00:11:28.207 "unmap": true, 00:11:28.207 "flush": true, 00:11:28.207 "reset": true, 00:11:28.207 "nvme_admin": false, 00:11:28.207 "nvme_io": false, 00:11:28.207 "nvme_io_md": false, 00:11:28.207 "write_zeroes": true, 00:11:28.207 "zcopy": true, 00:11:28.207 "get_zone_info": false, 00:11:28.207 "zone_management": false, 00:11:28.207 "zone_append": false, 00:11:28.207 "compare": false, 00:11:28.207 "compare_and_write": false, 00:11:28.207 "abort": true, 00:11:28.207 "seek_hole": false, 00:11:28.207 "seek_data": false, 00:11:28.207 "copy": true, 00:11:28.207 "nvme_iov_md": false 00:11:28.208 }, 00:11:28.208 "memory_domains": [ 00:11:28.208 { 00:11:28.208 "dma_device_id": "system", 00:11:28.208 "dma_device_type": 1 00:11:28.208 }, 00:11:28.208 { 00:11:28.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.208 "dma_device_type": 2 00:11:28.208 } 00:11:28.208 ], 00:11:28.208 "driver_specific": {} 00:11:28.208 } 00:11:28.208 ] 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:28.208 
12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.208 "name": "Existed_Raid", 00:11:28.208 "uuid": "5afd3b92-cd76-469f-8e8b-3c5a7b6715dc", 00:11:28.208 "strip_size_kb": 64, 00:11:28.208 "state": "online", 00:11:28.208 "raid_level": "concat", 00:11:28.208 "superblock": false, 00:11:28.208 "num_base_bdevs": 4, 00:11:28.208 "num_base_bdevs_discovered": 4, 00:11:28.208 
"num_base_bdevs_operational": 4, 00:11:28.208 "base_bdevs_list": [ 00:11:28.208 { 00:11:28.208 "name": "NewBaseBdev", 00:11:28.208 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:28.208 "is_configured": true, 00:11:28.208 "data_offset": 0, 00:11:28.208 "data_size": 65536 00:11:28.208 }, 00:11:28.208 { 00:11:28.208 "name": "BaseBdev2", 00:11:28.208 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:28.208 "is_configured": true, 00:11:28.208 "data_offset": 0, 00:11:28.208 "data_size": 65536 00:11:28.208 }, 00:11:28.208 { 00:11:28.208 "name": "BaseBdev3", 00:11:28.208 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:28.208 "is_configured": true, 00:11:28.208 "data_offset": 0, 00:11:28.208 "data_size": 65536 00:11:28.208 }, 00:11:28.208 { 00:11:28.208 "name": "BaseBdev4", 00:11:28.208 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:28.208 "is_configured": true, 00:11:28.208 "data_offset": 0, 00:11:28.208 "data_size": 65536 00:11:28.208 } 00:11:28.208 ] 00:11:28.208 }' 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.208 12:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.775 [2024-11-06 12:42:17.168491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.775 "name": "Existed_Raid", 00:11:28.775 "aliases": [ 00:11:28.775 "5afd3b92-cd76-469f-8e8b-3c5a7b6715dc" 00:11:28.775 ], 00:11:28.775 "product_name": "Raid Volume", 00:11:28.775 "block_size": 512, 00:11:28.775 "num_blocks": 262144, 00:11:28.775 "uuid": "5afd3b92-cd76-469f-8e8b-3c5a7b6715dc", 00:11:28.775 "assigned_rate_limits": { 00:11:28.775 "rw_ios_per_sec": 0, 00:11:28.775 "rw_mbytes_per_sec": 0, 00:11:28.775 "r_mbytes_per_sec": 0, 00:11:28.775 "w_mbytes_per_sec": 0 00:11:28.775 }, 00:11:28.775 "claimed": false, 00:11:28.775 "zoned": false, 00:11:28.775 "supported_io_types": { 00:11:28.775 "read": true, 00:11:28.775 "write": true, 00:11:28.775 "unmap": true, 00:11:28.775 "flush": true, 00:11:28.775 "reset": true, 00:11:28.775 "nvme_admin": false, 00:11:28.775 "nvme_io": false, 00:11:28.775 "nvme_io_md": false, 00:11:28.775 "write_zeroes": true, 00:11:28.775 "zcopy": false, 00:11:28.775 "get_zone_info": false, 00:11:28.775 "zone_management": false, 00:11:28.775 "zone_append": false, 00:11:28.775 "compare": false, 00:11:28.775 "compare_and_write": false, 00:11:28.775 "abort": false, 00:11:28.775 "seek_hole": false, 00:11:28.775 "seek_data": false, 00:11:28.775 "copy": false, 00:11:28.775 "nvme_iov_md": false 00:11:28.775 }, 00:11:28.775 "memory_domains": [ 00:11:28.775 { 00:11:28.775 "dma_device_id": "system", 
00:11:28.775 "dma_device_type": 1 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.775 "dma_device_type": 2 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "system", 00:11:28.775 "dma_device_type": 1 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.775 "dma_device_type": 2 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "system", 00:11:28.775 "dma_device_type": 1 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.775 "dma_device_type": 2 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "system", 00:11:28.775 "dma_device_type": 1 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.775 "dma_device_type": 2 00:11:28.775 } 00:11:28.775 ], 00:11:28.775 "driver_specific": { 00:11:28.775 "raid": { 00:11:28.775 "uuid": "5afd3b92-cd76-469f-8e8b-3c5a7b6715dc", 00:11:28.775 "strip_size_kb": 64, 00:11:28.775 "state": "online", 00:11:28.775 "raid_level": "concat", 00:11:28.775 "superblock": false, 00:11:28.775 "num_base_bdevs": 4, 00:11:28.775 "num_base_bdevs_discovered": 4, 00:11:28.775 "num_base_bdevs_operational": 4, 00:11:28.775 "base_bdevs_list": [ 00:11:28.775 { 00:11:28.775 "name": "NewBaseBdev", 00:11:28.775 "uuid": "9f9904c2-1f6b-4822-bc61-3d85b91ce158", 00:11:28.775 "is_configured": true, 00:11:28.775 "data_offset": 0, 00:11:28.775 "data_size": 65536 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "name": "BaseBdev2", 00:11:28.775 "uuid": "aa8be2d7-0996-469e-bbfd-38e009c80ce5", 00:11:28.775 "is_configured": true, 00:11:28.775 "data_offset": 0, 00:11:28.775 "data_size": 65536 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "name": "BaseBdev3", 00:11:28.775 "uuid": "cc693369-d7aa-41ac-a000-99fa7727a41d", 00:11:28.775 "is_configured": true, 00:11:28.775 "data_offset": 0, 00:11:28.775 "data_size": 65536 00:11:28.775 }, 00:11:28.775 { 00:11:28.775 "name": "BaseBdev4", 
00:11:28.775 "uuid": "d35eed3e-c04e-4377-9848-2f4b2e671810", 00:11:28.775 "is_configured": true, 00:11:28.775 "data_offset": 0, 00:11:28.775 "data_size": 65536 00:11:28.775 } 00:11:28.775 ] 00:11:28.775 } 00:11:28.775 } 00:11:28.775 }' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:28.775 BaseBdev2 00:11:28.775 BaseBdev3 00:11:28.775 BaseBdev4' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.775 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:28.776 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.776 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.776 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.037 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.037 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.037 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.037 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.038 12:42:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.038 [2024-11-06 12:42:17.536078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.038 [2024-11-06 12:42:17.536280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.038 [2024-11-06 12:42:17.536437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.038 [2024-11-06 12:42:17.536544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.038 [2024-11-06 12:42:17.536563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71418 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 71418 ']' 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71418 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71418 00:11:29.038 killing process with pid 71418 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71418' 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71418 00:11:29.038 [2024-11-06 12:42:17.577002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.038 12:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71418 00:11:29.604 [2024-11-06 12:42:17.970763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.538 ************************************ 00:11:30.538 END TEST raid_state_function_test 00:11:30.538 ************************************ 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:30.538 00:11:30.538 real 0m13.024s 00:11:30.538 user 0m21.463s 00:11:30.538 sys 0m1.845s 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.538 12:42:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:30.538 12:42:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:30.538 12:42:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.538 12:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.538 ************************************ 00:11:30.538 START TEST raid_state_function_test_sb 00:11:30.538 ************************************ 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:30.538 12:42:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72100 00:11:30.538 Process raid pid: 72100 00:11:30.538 12:42:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72100' 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72100 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72100 ']' 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:30.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:30.538 12:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.796 [2024-11-06 12:42:19.273767] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:30.796 [2024-11-06 12:42:19.273938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.089 [2024-11-06 12:42:19.452319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.089 [2024-11-06 12:42:19.602960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.348 [2024-11-06 12:42:19.831895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.348 [2024-11-06 12:42:19.831948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.913 [2024-11-06 12:42:20.302068] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.913 [2024-11-06 12:42:20.302136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.913 [2024-11-06 12:42:20.302155] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.913 [2024-11-06 12:42:20.302173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.913 [2024-11-06 12:42:20.302184] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:31.913 [2024-11-06 12:42:20.302213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.913 [2024-11-06 12:42:20.302225] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.913 [2024-11-06 12:42:20.302241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.913 12:42:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.913 "name": "Existed_Raid", 00:11:31.913 "uuid": "adfb2ff6-45ed-4b83-9544-5d0e96cf5a07", 00:11:31.913 "strip_size_kb": 64, 00:11:31.913 "state": "configuring", 00:11:31.913 "raid_level": "concat", 00:11:31.913 "superblock": true, 00:11:31.913 "num_base_bdevs": 4, 00:11:31.913 "num_base_bdevs_discovered": 0, 00:11:31.913 "num_base_bdevs_operational": 4, 00:11:31.913 "base_bdevs_list": [ 00:11:31.913 { 00:11:31.913 "name": "BaseBdev1", 00:11:31.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.913 "is_configured": false, 00:11:31.913 "data_offset": 0, 00:11:31.913 "data_size": 0 00:11:31.913 }, 00:11:31.913 { 00:11:31.913 "name": "BaseBdev2", 00:11:31.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.913 "is_configured": false, 00:11:31.913 "data_offset": 0, 00:11:31.913 "data_size": 0 00:11:31.913 }, 00:11:31.913 { 00:11:31.913 "name": "BaseBdev3", 00:11:31.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.913 "is_configured": false, 00:11:31.913 "data_offset": 0, 00:11:31.913 "data_size": 0 00:11:31.913 }, 00:11:31.913 { 00:11:31.913 "name": "BaseBdev4", 00:11:31.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.913 "is_configured": false, 00:11:31.913 "data_offset": 0, 00:11:31.913 "data_size": 0 00:11:31.913 } 00:11:31.913 ] 00:11:31.913 }' 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.913 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.479 12:42:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.479 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.479 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.479 [2024-11-06 12:42:20.834155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.479 [2024-11-06 12:42:20.834239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:32.479 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.479 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.479 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.479 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.479 [2024-11-06 12:42:20.846136] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.479 [2024-11-06 12:42:20.846361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.479 [2024-11-06 12:42:20.846391] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.480 [2024-11-06 12:42:20.846411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.480 [2024-11-06 12:42:20.846422] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.480 [2024-11-06 12:42:20.846438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.480 [2024-11-06 12:42:20.846448] bdev.c:8424:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:32.480 [2024-11-06 12:42:20.846463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.480 [2024-11-06 12:42:20.895487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.480 BaseBdev1 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.480 [ 00:11:32.480 { 00:11:32.480 "name": "BaseBdev1", 00:11:32.480 "aliases": [ 00:11:32.480 "8c473d84-5f34-409f-83f0-2b5b6bf202c9" 00:11:32.480 ], 00:11:32.480 "product_name": "Malloc disk", 00:11:32.480 "block_size": 512, 00:11:32.480 "num_blocks": 65536, 00:11:32.480 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:32.480 "assigned_rate_limits": { 00:11:32.480 "rw_ios_per_sec": 0, 00:11:32.480 "rw_mbytes_per_sec": 0, 00:11:32.480 "r_mbytes_per_sec": 0, 00:11:32.480 "w_mbytes_per_sec": 0 00:11:32.480 }, 00:11:32.480 "claimed": true, 00:11:32.480 "claim_type": "exclusive_write", 00:11:32.480 "zoned": false, 00:11:32.480 "supported_io_types": { 00:11:32.480 "read": true, 00:11:32.480 "write": true, 00:11:32.480 "unmap": true, 00:11:32.480 "flush": true, 00:11:32.480 "reset": true, 00:11:32.480 "nvme_admin": false, 00:11:32.480 "nvme_io": false, 00:11:32.480 "nvme_io_md": false, 00:11:32.480 "write_zeroes": true, 00:11:32.480 "zcopy": true, 00:11:32.480 "get_zone_info": false, 00:11:32.480 "zone_management": false, 00:11:32.480 "zone_append": false, 00:11:32.480 "compare": false, 00:11:32.480 "compare_and_write": false, 00:11:32.480 "abort": true, 00:11:32.480 "seek_hole": false, 00:11:32.480 "seek_data": false, 00:11:32.480 "copy": true, 00:11:32.480 "nvme_iov_md": false 00:11:32.480 }, 00:11:32.480 "memory_domains": [ 00:11:32.480 { 00:11:32.480 "dma_device_id": "system", 00:11:32.480 "dma_device_type": 1 00:11:32.480 }, 00:11:32.480 { 00:11:32.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.480 "dma_device_type": 2 00:11:32.480 } 
00:11:32.480 ], 00:11:32.480 "driver_specific": {} 00:11:32.480 } 00:11:32.480 ] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.480 12:42:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.480 "name": "Existed_Raid", 00:11:32.480 "uuid": "33507263-9142-4fe3-839b-eb45eecd2e7c", 00:11:32.480 "strip_size_kb": 64, 00:11:32.480 "state": "configuring", 00:11:32.480 "raid_level": "concat", 00:11:32.480 "superblock": true, 00:11:32.480 "num_base_bdevs": 4, 00:11:32.480 "num_base_bdevs_discovered": 1, 00:11:32.480 "num_base_bdevs_operational": 4, 00:11:32.480 "base_bdevs_list": [ 00:11:32.480 { 00:11:32.480 "name": "BaseBdev1", 00:11:32.480 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:32.480 "is_configured": true, 00:11:32.480 "data_offset": 2048, 00:11:32.480 "data_size": 63488 00:11:32.480 }, 00:11:32.480 { 00:11:32.480 "name": "BaseBdev2", 00:11:32.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.480 "is_configured": false, 00:11:32.480 "data_offset": 0, 00:11:32.480 "data_size": 0 00:11:32.480 }, 00:11:32.480 { 00:11:32.480 "name": "BaseBdev3", 00:11:32.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.480 "is_configured": false, 00:11:32.480 "data_offset": 0, 00:11:32.480 "data_size": 0 00:11:32.480 }, 00:11:32.480 { 00:11:32.480 "name": "BaseBdev4", 00:11:32.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.480 "is_configured": false, 00:11:32.480 "data_offset": 0, 00:11:32.480 "data_size": 0 00:11:32.480 } 00:11:32.480 ] 00:11:32.480 }' 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.480 12:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.046 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.046 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.046 12:42:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.046 [2024-11-06 12:42:21.427745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.046 [2024-11-06 12:42:21.428081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:33.046 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.046 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.046 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.046 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.046 [2024-11-06 12:42:21.435781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.046 [2024-11-06 12:42:21.438817] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.046 [2024-11-06 12:42:21.439085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.046 [2024-11-06 12:42:21.439120] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.046 [2024-11-06 12:42:21.439145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.047 [2024-11-06 12:42:21.439160] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.047 [2024-11-06 12:42:21.439178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:33.047 "name": "Existed_Raid", 00:11:33.047 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:33.047 "strip_size_kb": 64, 00:11:33.047 "state": "configuring", 00:11:33.047 "raid_level": "concat", 00:11:33.047 "superblock": true, 00:11:33.047 "num_base_bdevs": 4, 00:11:33.047 "num_base_bdevs_discovered": 1, 00:11:33.047 "num_base_bdevs_operational": 4, 00:11:33.047 "base_bdevs_list": [ 00:11:33.047 { 00:11:33.047 "name": "BaseBdev1", 00:11:33.047 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:33.047 "is_configured": true, 00:11:33.047 "data_offset": 2048, 00:11:33.047 "data_size": 63488 00:11:33.047 }, 00:11:33.047 { 00:11:33.047 "name": "BaseBdev2", 00:11:33.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.047 "is_configured": false, 00:11:33.047 "data_offset": 0, 00:11:33.047 "data_size": 0 00:11:33.047 }, 00:11:33.047 { 00:11:33.047 "name": "BaseBdev3", 00:11:33.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.047 "is_configured": false, 00:11:33.047 "data_offset": 0, 00:11:33.047 "data_size": 0 00:11:33.047 }, 00:11:33.047 { 00:11:33.047 "name": "BaseBdev4", 00:11:33.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.047 "is_configured": false, 00:11:33.047 "data_offset": 0, 00:11:33.047 "data_size": 0 00:11:33.047 } 00:11:33.047 ] 00:11:33.047 }' 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.047 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 [2024-11-06 12:42:21.998664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:33.613 BaseBdev2 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:33.613 12:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 [ 00:11:33.613 { 00:11:33.613 "name": "BaseBdev2", 00:11:33.613 "aliases": [ 00:11:33.613 "b6c522e7-1332-4392-b648-4294ed745e83" 00:11:33.613 ], 00:11:33.613 "product_name": "Malloc disk", 00:11:33.613 "block_size": 512, 00:11:33.613 "num_blocks": 65536, 00:11:33.613 "uuid": "b6c522e7-1332-4392-b648-4294ed745e83", 
00:11:33.613 "assigned_rate_limits": { 00:11:33.613 "rw_ios_per_sec": 0, 00:11:33.613 "rw_mbytes_per_sec": 0, 00:11:33.613 "r_mbytes_per_sec": 0, 00:11:33.613 "w_mbytes_per_sec": 0 00:11:33.613 }, 00:11:33.613 "claimed": true, 00:11:33.613 "claim_type": "exclusive_write", 00:11:33.613 "zoned": false, 00:11:33.613 "supported_io_types": { 00:11:33.613 "read": true, 00:11:33.613 "write": true, 00:11:33.613 "unmap": true, 00:11:33.613 "flush": true, 00:11:33.613 "reset": true, 00:11:33.613 "nvme_admin": false, 00:11:33.613 "nvme_io": false, 00:11:33.613 "nvme_io_md": false, 00:11:33.613 "write_zeroes": true, 00:11:33.613 "zcopy": true, 00:11:33.613 "get_zone_info": false, 00:11:33.613 "zone_management": false, 00:11:33.613 "zone_append": false, 00:11:33.613 "compare": false, 00:11:33.613 "compare_and_write": false, 00:11:33.613 "abort": true, 00:11:33.613 "seek_hole": false, 00:11:33.613 "seek_data": false, 00:11:33.613 "copy": true, 00:11:33.613 "nvme_iov_md": false 00:11:33.613 }, 00:11:33.613 "memory_domains": [ 00:11:33.613 { 00:11:33.613 "dma_device_id": "system", 00:11:33.613 "dma_device_type": 1 00:11:33.613 }, 00:11:33.613 { 00:11:33.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.613 "dma_device_type": 2 00:11:33.613 } 00:11:33.613 ], 00:11:33.613 "driver_specific": {} 00:11:33.613 } 00:11:33.613 ] 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.613 "name": "Existed_Raid", 00:11:33.613 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:33.613 "strip_size_kb": 64, 00:11:33.613 "state": "configuring", 00:11:33.613 "raid_level": "concat", 00:11:33.613 "superblock": true, 00:11:33.613 "num_base_bdevs": 4, 00:11:33.613 "num_base_bdevs_discovered": 2, 00:11:33.613 
"num_base_bdevs_operational": 4, 00:11:33.613 "base_bdevs_list": [ 00:11:33.613 { 00:11:33.613 "name": "BaseBdev1", 00:11:33.613 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:33.613 "is_configured": true, 00:11:33.613 "data_offset": 2048, 00:11:33.613 "data_size": 63488 00:11:33.613 }, 00:11:33.613 { 00:11:33.613 "name": "BaseBdev2", 00:11:33.613 "uuid": "b6c522e7-1332-4392-b648-4294ed745e83", 00:11:33.613 "is_configured": true, 00:11:33.613 "data_offset": 2048, 00:11:33.613 "data_size": 63488 00:11:33.613 }, 00:11:33.613 { 00:11:33.613 "name": "BaseBdev3", 00:11:33.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.613 "is_configured": false, 00:11:33.613 "data_offset": 0, 00:11:33.613 "data_size": 0 00:11:33.613 }, 00:11:33.613 { 00:11:33.613 "name": "BaseBdev4", 00:11:33.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.613 "is_configured": false, 00:11:33.613 "data_offset": 0, 00:11:33.613 "data_size": 0 00:11:33.613 } 00:11:33.613 ] 00:11:33.613 }' 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.613 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.180 [2024-11-06 12:42:22.593107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.180 BaseBdev3 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.180 [ 00:11:34.180 { 00:11:34.180 "name": "BaseBdev3", 00:11:34.180 "aliases": [ 00:11:34.180 "2ef4da32-e4c2-4ee8-b839-d66a6ac80f9e" 00:11:34.180 ], 00:11:34.180 "product_name": "Malloc disk", 00:11:34.180 "block_size": 512, 00:11:34.180 "num_blocks": 65536, 00:11:34.180 "uuid": "2ef4da32-e4c2-4ee8-b839-d66a6ac80f9e", 00:11:34.180 "assigned_rate_limits": { 00:11:34.180 "rw_ios_per_sec": 0, 00:11:34.180 "rw_mbytes_per_sec": 0, 00:11:34.180 "r_mbytes_per_sec": 0, 00:11:34.180 "w_mbytes_per_sec": 0 00:11:34.180 }, 00:11:34.180 "claimed": true, 00:11:34.180 "claim_type": "exclusive_write", 00:11:34.180 "zoned": false, 00:11:34.180 "supported_io_types": { 
00:11:34.180 "read": true, 00:11:34.180 "write": true, 00:11:34.180 "unmap": true, 00:11:34.180 "flush": true, 00:11:34.180 "reset": true, 00:11:34.180 "nvme_admin": false, 00:11:34.180 "nvme_io": false, 00:11:34.180 "nvme_io_md": false, 00:11:34.180 "write_zeroes": true, 00:11:34.180 "zcopy": true, 00:11:34.180 "get_zone_info": false, 00:11:34.180 "zone_management": false, 00:11:34.180 "zone_append": false, 00:11:34.180 "compare": false, 00:11:34.180 "compare_and_write": false, 00:11:34.180 "abort": true, 00:11:34.180 "seek_hole": false, 00:11:34.180 "seek_data": false, 00:11:34.180 "copy": true, 00:11:34.180 "nvme_iov_md": false 00:11:34.180 }, 00:11:34.180 "memory_domains": [ 00:11:34.180 { 00:11:34.180 "dma_device_id": "system", 00:11:34.180 "dma_device_type": 1 00:11:34.180 }, 00:11:34.180 { 00:11:34.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.180 "dma_device_type": 2 00:11:34.180 } 00:11:34.180 ], 00:11:34.180 "driver_specific": {} 00:11:34.180 } 00:11:34.180 ] 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.180 "name": "Existed_Raid", 00:11:34.180 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:34.180 "strip_size_kb": 64, 00:11:34.180 "state": "configuring", 00:11:34.180 "raid_level": "concat", 00:11:34.180 "superblock": true, 00:11:34.180 "num_base_bdevs": 4, 00:11:34.180 "num_base_bdevs_discovered": 3, 00:11:34.180 "num_base_bdevs_operational": 4, 00:11:34.180 "base_bdevs_list": [ 00:11:34.180 { 00:11:34.180 "name": "BaseBdev1", 00:11:34.180 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:34.180 "is_configured": true, 00:11:34.180 "data_offset": 2048, 00:11:34.180 "data_size": 63488 00:11:34.180 }, 00:11:34.180 { 00:11:34.180 "name": "BaseBdev2", 00:11:34.180 
"uuid": "b6c522e7-1332-4392-b648-4294ed745e83", 00:11:34.180 "is_configured": true, 00:11:34.180 "data_offset": 2048, 00:11:34.180 "data_size": 63488 00:11:34.180 }, 00:11:34.180 { 00:11:34.180 "name": "BaseBdev3", 00:11:34.180 "uuid": "2ef4da32-e4c2-4ee8-b839-d66a6ac80f9e", 00:11:34.180 "is_configured": true, 00:11:34.180 "data_offset": 2048, 00:11:34.180 "data_size": 63488 00:11:34.180 }, 00:11:34.180 { 00:11:34.180 "name": "BaseBdev4", 00:11:34.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.180 "is_configured": false, 00:11:34.180 "data_offset": 0, 00:11:34.180 "data_size": 0 00:11:34.180 } 00:11:34.180 ] 00:11:34.180 }' 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.180 12:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.748 [2024-11-06 12:42:23.172299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.748 [2024-11-06 12:42:23.172631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.748 [2024-11-06 12:42:23.172652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:34.748 BaseBdev4 00:11:34.748 [2024-11-06 12:42:23.172988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.748 [2024-11-06 12:42:23.173227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:34.748 [2024-11-06 12:42:23.173257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:34.748 [2024-11-06 12:42:23.173444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.748 [ 00:11:34.748 { 00:11:34.748 "name": "BaseBdev4", 00:11:34.748 "aliases": [ 00:11:34.748 "e097438d-ba8c-49e2-b901-82767810202e" 00:11:34.748 ], 00:11:34.748 "product_name": "Malloc disk", 00:11:34.748 "block_size": 512, 00:11:34.748 
"num_blocks": 65536, 00:11:34.748 "uuid": "e097438d-ba8c-49e2-b901-82767810202e", 00:11:34.748 "assigned_rate_limits": { 00:11:34.748 "rw_ios_per_sec": 0, 00:11:34.748 "rw_mbytes_per_sec": 0, 00:11:34.748 "r_mbytes_per_sec": 0, 00:11:34.748 "w_mbytes_per_sec": 0 00:11:34.748 }, 00:11:34.748 "claimed": true, 00:11:34.748 "claim_type": "exclusive_write", 00:11:34.748 "zoned": false, 00:11:34.748 "supported_io_types": { 00:11:34.748 "read": true, 00:11:34.748 "write": true, 00:11:34.748 "unmap": true, 00:11:34.748 "flush": true, 00:11:34.748 "reset": true, 00:11:34.748 "nvme_admin": false, 00:11:34.748 "nvme_io": false, 00:11:34.748 "nvme_io_md": false, 00:11:34.748 "write_zeroes": true, 00:11:34.748 "zcopy": true, 00:11:34.748 "get_zone_info": false, 00:11:34.748 "zone_management": false, 00:11:34.748 "zone_append": false, 00:11:34.748 "compare": false, 00:11:34.748 "compare_and_write": false, 00:11:34.748 "abort": true, 00:11:34.748 "seek_hole": false, 00:11:34.748 "seek_data": false, 00:11:34.748 "copy": true, 00:11:34.748 "nvme_iov_md": false 00:11:34.748 }, 00:11:34.748 "memory_domains": [ 00:11:34.748 { 00:11:34.748 "dma_device_id": "system", 00:11:34.748 "dma_device_type": 1 00:11:34.748 }, 00:11:34.748 { 00:11:34.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.748 "dma_device_type": 2 00:11:34.748 } 00:11:34.748 ], 00:11:34.748 "driver_specific": {} 00:11:34.748 } 00:11:34.748 ] 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.748 "name": "Existed_Raid", 00:11:34.748 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:34.748 "strip_size_kb": 64, 00:11:34.748 "state": "online", 00:11:34.748 "raid_level": "concat", 00:11:34.748 "superblock": true, 00:11:34.748 "num_base_bdevs": 4, 
00:11:34.748 "num_base_bdevs_discovered": 4, 00:11:34.748 "num_base_bdevs_operational": 4, 00:11:34.748 "base_bdevs_list": [ 00:11:34.748 { 00:11:34.748 "name": "BaseBdev1", 00:11:34.748 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:34.748 "is_configured": true, 00:11:34.748 "data_offset": 2048, 00:11:34.748 "data_size": 63488 00:11:34.748 }, 00:11:34.748 { 00:11:34.748 "name": "BaseBdev2", 00:11:34.748 "uuid": "b6c522e7-1332-4392-b648-4294ed745e83", 00:11:34.748 "is_configured": true, 00:11:34.748 "data_offset": 2048, 00:11:34.748 "data_size": 63488 00:11:34.748 }, 00:11:34.748 { 00:11:34.748 "name": "BaseBdev3", 00:11:34.748 "uuid": "2ef4da32-e4c2-4ee8-b839-d66a6ac80f9e", 00:11:34.748 "is_configured": true, 00:11:34.748 "data_offset": 2048, 00:11:34.748 "data_size": 63488 00:11:34.748 }, 00:11:34.748 { 00:11:34.748 "name": "BaseBdev4", 00:11:34.748 "uuid": "e097438d-ba8c-49e2-b901-82767810202e", 00:11:34.748 "is_configured": true, 00:11:34.748 "data_offset": 2048, 00:11:34.748 "data_size": 63488 00:11:34.748 } 00:11:34.748 ] 00:11:34.748 }' 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.748 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.314 
12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.314 [2024-11-06 12:42:23.725020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.314 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.314 "name": "Existed_Raid", 00:11:35.314 "aliases": [ 00:11:35.314 "2e4abb1f-a49c-403c-a748-83ede24fcd49" 00:11:35.314 ], 00:11:35.314 "product_name": "Raid Volume", 00:11:35.314 "block_size": 512, 00:11:35.314 "num_blocks": 253952, 00:11:35.314 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:35.314 "assigned_rate_limits": { 00:11:35.314 "rw_ios_per_sec": 0, 00:11:35.314 "rw_mbytes_per_sec": 0, 00:11:35.314 "r_mbytes_per_sec": 0, 00:11:35.314 "w_mbytes_per_sec": 0 00:11:35.314 }, 00:11:35.314 "claimed": false, 00:11:35.314 "zoned": false, 00:11:35.314 "supported_io_types": { 00:11:35.314 "read": true, 00:11:35.315 "write": true, 00:11:35.315 "unmap": true, 00:11:35.315 "flush": true, 00:11:35.315 "reset": true, 00:11:35.315 "nvme_admin": false, 00:11:35.315 "nvme_io": false, 00:11:35.315 "nvme_io_md": false, 00:11:35.315 "write_zeroes": true, 00:11:35.315 "zcopy": false, 00:11:35.315 "get_zone_info": false, 00:11:35.315 "zone_management": false, 00:11:35.315 "zone_append": false, 00:11:35.315 "compare": false, 00:11:35.315 "compare_and_write": false, 00:11:35.315 "abort": false, 00:11:35.315 "seek_hole": false, 00:11:35.315 "seek_data": false, 00:11:35.315 "copy": false, 00:11:35.315 
"nvme_iov_md": false 00:11:35.315 }, 00:11:35.315 "memory_domains": [ 00:11:35.315 { 00:11:35.315 "dma_device_id": "system", 00:11:35.315 "dma_device_type": 1 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.315 "dma_device_type": 2 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "system", 00:11:35.315 "dma_device_type": 1 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.315 "dma_device_type": 2 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "system", 00:11:35.315 "dma_device_type": 1 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.315 "dma_device_type": 2 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "system", 00:11:35.315 "dma_device_type": 1 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.315 "dma_device_type": 2 00:11:35.315 } 00:11:35.315 ], 00:11:35.315 "driver_specific": { 00:11:35.315 "raid": { 00:11:35.315 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:35.315 "strip_size_kb": 64, 00:11:35.315 "state": "online", 00:11:35.315 "raid_level": "concat", 00:11:35.315 "superblock": true, 00:11:35.315 "num_base_bdevs": 4, 00:11:35.315 "num_base_bdevs_discovered": 4, 00:11:35.315 "num_base_bdevs_operational": 4, 00:11:35.315 "base_bdevs_list": [ 00:11:35.315 { 00:11:35.315 "name": "BaseBdev1", 00:11:35.315 "uuid": "8c473d84-5f34-409f-83f0-2b5b6bf202c9", 00:11:35.315 "is_configured": true, 00:11:35.315 "data_offset": 2048, 00:11:35.315 "data_size": 63488 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "name": "BaseBdev2", 00:11:35.315 "uuid": "b6c522e7-1332-4392-b648-4294ed745e83", 00:11:35.315 "is_configured": true, 00:11:35.315 "data_offset": 2048, 00:11:35.315 "data_size": 63488 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "name": "BaseBdev3", 00:11:35.315 "uuid": "2ef4da32-e4c2-4ee8-b839-d66a6ac80f9e", 00:11:35.315 "is_configured": true, 
00:11:35.315 "data_offset": 2048, 00:11:35.315 "data_size": 63488 00:11:35.315 }, 00:11:35.315 { 00:11:35.315 "name": "BaseBdev4", 00:11:35.315 "uuid": "e097438d-ba8c-49e2-b901-82767810202e", 00:11:35.315 "is_configured": true, 00:11:35.315 "data_offset": 2048, 00:11:35.315 "data_size": 63488 00:11:35.315 } 00:11:35.315 ] 00:11:35.315 } 00:11:35.315 } 00:11:35.315 }' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:35.315 BaseBdev2 00:11:35.315 BaseBdev3 00:11:35.315 BaseBdev4' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.315 12:42:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.315 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.573 12:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.573 [2024-11-06 12:42:24.108787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.573 [2024-11-06 12:42:24.108859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.573 [2024-11-06 12:42:24.108928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.573 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:35.832 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.832 "name": "Existed_Raid", 00:11:35.832 "uuid": "2e4abb1f-a49c-403c-a748-83ede24fcd49", 00:11:35.832 "strip_size_kb": 64, 00:11:35.832 "state": "offline", 00:11:35.832 "raid_level": "concat", 00:11:35.832 "superblock": true, 00:11:35.832 "num_base_bdevs": 4, 00:11:35.832 "num_base_bdevs_discovered": 3, 00:11:35.832 "num_base_bdevs_operational": 3, 00:11:35.832 "base_bdevs_list": [ 00:11:35.832 { 00:11:35.832 "name": null, 00:11:35.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.832 "is_configured": false, 00:11:35.832 "data_offset": 0, 00:11:35.832 "data_size": 63488 00:11:35.832 }, 00:11:35.832 { 00:11:35.832 "name": "BaseBdev2", 00:11:35.832 "uuid": "b6c522e7-1332-4392-b648-4294ed745e83", 00:11:35.832 "is_configured": true, 00:11:35.832 "data_offset": 2048, 00:11:35.832 "data_size": 63488 00:11:35.832 }, 00:11:35.832 { 00:11:35.832 "name": "BaseBdev3", 00:11:35.832 "uuid": "2ef4da32-e4c2-4ee8-b839-d66a6ac80f9e", 00:11:35.832 "is_configured": true, 00:11:35.832 "data_offset": 2048, 00:11:35.832 "data_size": 63488 00:11:35.832 }, 00:11:35.832 { 00:11:35.832 "name": "BaseBdev4", 00:11:35.832 "uuid": "e097438d-ba8c-49e2-b901-82767810202e", 00:11:35.832 "is_configured": true, 00:11:35.832 "data_offset": 2048, 00:11:35.832 "data_size": 63488 00:11:35.832 } 00:11:35.832 ] 00:11:35.832 }' 00:11:35.832 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.832 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.091 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.091 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.091 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.091 
12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.091 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.091 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.091 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.373 [2024-11-06 12:42:24.797528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.373 12:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.373 [2024-11-06 12:42:24.948227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:36.631 12:42:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.631 [2024-11-06 12:42:25.100145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:36.631 [2024-11-06 12:42:25.100469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.631 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.890 BaseBdev2 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.890 [ 00:11:36.890 { 00:11:36.890 "name": "BaseBdev2", 00:11:36.890 "aliases": [ 00:11:36.890 
"9b9856b1-e04e-4cbb-878b-8bc1435b2fa3" 00:11:36.890 ], 00:11:36.890 "product_name": "Malloc disk", 00:11:36.890 "block_size": 512, 00:11:36.890 "num_blocks": 65536, 00:11:36.890 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:36.890 "assigned_rate_limits": { 00:11:36.890 "rw_ios_per_sec": 0, 00:11:36.890 "rw_mbytes_per_sec": 0, 00:11:36.890 "r_mbytes_per_sec": 0, 00:11:36.890 "w_mbytes_per_sec": 0 00:11:36.890 }, 00:11:36.890 "claimed": false, 00:11:36.890 "zoned": false, 00:11:36.890 "supported_io_types": { 00:11:36.890 "read": true, 00:11:36.890 "write": true, 00:11:36.890 "unmap": true, 00:11:36.890 "flush": true, 00:11:36.890 "reset": true, 00:11:36.890 "nvme_admin": false, 00:11:36.890 "nvme_io": false, 00:11:36.890 "nvme_io_md": false, 00:11:36.890 "write_zeroes": true, 00:11:36.890 "zcopy": true, 00:11:36.890 "get_zone_info": false, 00:11:36.890 "zone_management": false, 00:11:36.890 "zone_append": false, 00:11:36.890 "compare": false, 00:11:36.890 "compare_and_write": false, 00:11:36.890 "abort": true, 00:11:36.890 "seek_hole": false, 00:11:36.890 "seek_data": false, 00:11:36.890 "copy": true, 00:11:36.890 "nvme_iov_md": false 00:11:36.890 }, 00:11:36.890 "memory_domains": [ 00:11:36.890 { 00:11:36.890 "dma_device_id": "system", 00:11:36.890 "dma_device_type": 1 00:11:36.890 }, 00:11:36.890 { 00:11:36.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.890 "dma_device_type": 2 00:11:36.890 } 00:11:36.890 ], 00:11:36.890 "driver_specific": {} 00:11:36.890 } 00:11:36.890 ] 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.890 12:42:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.890 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 BaseBdev3 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 [ 00:11:36.891 { 
00:11:36.891 "name": "BaseBdev3", 00:11:36.891 "aliases": [ 00:11:36.891 "187676b1-9fb4-467b-960c-140854795130" 00:11:36.891 ], 00:11:36.891 "product_name": "Malloc disk", 00:11:36.891 "block_size": 512, 00:11:36.891 "num_blocks": 65536, 00:11:36.891 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:36.891 "assigned_rate_limits": { 00:11:36.891 "rw_ios_per_sec": 0, 00:11:36.891 "rw_mbytes_per_sec": 0, 00:11:36.891 "r_mbytes_per_sec": 0, 00:11:36.891 "w_mbytes_per_sec": 0 00:11:36.891 }, 00:11:36.891 "claimed": false, 00:11:36.891 "zoned": false, 00:11:36.891 "supported_io_types": { 00:11:36.891 "read": true, 00:11:36.891 "write": true, 00:11:36.891 "unmap": true, 00:11:36.891 "flush": true, 00:11:36.891 "reset": true, 00:11:36.891 "nvme_admin": false, 00:11:36.891 "nvme_io": false, 00:11:36.891 "nvme_io_md": false, 00:11:36.891 "write_zeroes": true, 00:11:36.891 "zcopy": true, 00:11:36.891 "get_zone_info": false, 00:11:36.891 "zone_management": false, 00:11:36.891 "zone_append": false, 00:11:36.891 "compare": false, 00:11:36.891 "compare_and_write": false, 00:11:36.891 "abort": true, 00:11:36.891 "seek_hole": false, 00:11:36.891 "seek_data": false, 00:11:36.891 "copy": true, 00:11:36.891 "nvme_iov_md": false 00:11:36.891 }, 00:11:36.891 "memory_domains": [ 00:11:36.891 { 00:11:36.891 "dma_device_id": "system", 00:11:36.891 "dma_device_type": 1 00:11:36.891 }, 00:11:36.891 { 00:11:36.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.891 "dma_device_type": 2 00:11:36.891 } 00:11:36.891 ], 00:11:36.891 "driver_specific": {} 00:11:36.891 } 00:11:36.891 ] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 BaseBdev4 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:36.891 [ 00:11:36.891 { 00:11:36.891 "name": "BaseBdev4", 00:11:36.891 "aliases": [ 00:11:36.891 "d5981548-c9a5-4eb2-8309-0aac99aceb50" 00:11:36.891 ], 00:11:36.891 "product_name": "Malloc disk", 00:11:36.891 "block_size": 512, 00:11:36.891 "num_blocks": 65536, 00:11:36.891 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:36.891 "assigned_rate_limits": { 00:11:36.891 "rw_ios_per_sec": 0, 00:11:36.891 "rw_mbytes_per_sec": 0, 00:11:36.891 "r_mbytes_per_sec": 0, 00:11:36.891 "w_mbytes_per_sec": 0 00:11:36.891 }, 00:11:36.891 "claimed": false, 00:11:36.891 "zoned": false, 00:11:36.891 "supported_io_types": { 00:11:36.891 "read": true, 00:11:36.891 "write": true, 00:11:36.891 "unmap": true, 00:11:36.891 "flush": true, 00:11:36.891 "reset": true, 00:11:36.891 "nvme_admin": false, 00:11:36.891 "nvme_io": false, 00:11:36.891 "nvme_io_md": false, 00:11:36.891 "write_zeroes": true, 00:11:36.891 "zcopy": true, 00:11:36.891 "get_zone_info": false, 00:11:36.891 "zone_management": false, 00:11:36.891 "zone_append": false, 00:11:36.891 "compare": false, 00:11:36.891 "compare_and_write": false, 00:11:36.891 "abort": true, 00:11:36.891 "seek_hole": false, 00:11:36.891 "seek_data": false, 00:11:36.891 "copy": true, 00:11:36.891 "nvme_iov_md": false 00:11:36.891 }, 00:11:36.891 "memory_domains": [ 00:11:36.891 { 00:11:36.891 "dma_device_id": "system", 00:11:36.891 "dma_device_type": 1 00:11:36.891 }, 00:11:36.891 { 00:11:36.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.891 "dma_device_type": 2 00:11:36.891 } 00:11:36.891 ], 00:11:36.891 "driver_specific": {} 00:11:36.891 } 00:11:36.891 ] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:36.891 12:42:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 [2024-11-06 12:42:25.480432] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.891 [2024-11-06 12:42:25.480680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.891 [2024-11-06 12:42:25.480838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.891 [2024-11-06 12:42:25.483154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.891 [2024-11-06 12:42:25.483412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.891 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.891 "name": "Existed_Raid", 00:11:36.891 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:36.891 "strip_size_kb": 64, 00:11:36.891 "state": "configuring", 00:11:36.891 "raid_level": "concat", 00:11:36.891 "superblock": true, 00:11:36.891 "num_base_bdevs": 4, 00:11:36.891 "num_base_bdevs_discovered": 3, 00:11:36.891 "num_base_bdevs_operational": 4, 00:11:36.891 "base_bdevs_list": [ 00:11:36.891 { 00:11:36.891 "name": "BaseBdev1", 00:11:36.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.892 "is_configured": false, 00:11:36.892 "data_offset": 0, 00:11:36.892 "data_size": 0 00:11:36.892 }, 00:11:36.892 { 00:11:36.892 "name": "BaseBdev2", 00:11:36.892 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:36.892 "is_configured": true, 00:11:36.892 "data_offset": 2048, 00:11:36.892 "data_size": 63488 
00:11:36.892 }, 00:11:36.892 { 00:11:36.892 "name": "BaseBdev3", 00:11:36.892 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:36.892 "is_configured": true, 00:11:36.892 "data_offset": 2048, 00:11:36.892 "data_size": 63488 00:11:36.892 }, 00:11:36.892 { 00:11:36.892 "name": "BaseBdev4", 00:11:36.892 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:36.892 "is_configured": true, 00:11:36.892 "data_offset": 2048, 00:11:36.892 "data_size": 63488 00:11:36.892 } 00:11:36.892 ] 00:11:36.892 }' 00:11:36.892 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.892 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.458 [2024-11-06 12:42:25.972586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.458 12:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.458 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.458 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.458 "name": "Existed_Raid", 00:11:37.458 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:37.458 "strip_size_kb": 64, 00:11:37.458 "state": "configuring", 00:11:37.458 "raid_level": "concat", 00:11:37.458 "superblock": true, 00:11:37.458 "num_base_bdevs": 4, 00:11:37.458 "num_base_bdevs_discovered": 2, 00:11:37.458 "num_base_bdevs_operational": 4, 00:11:37.458 "base_bdevs_list": [ 00:11:37.458 { 00:11:37.458 "name": "BaseBdev1", 00:11:37.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.458 "is_configured": false, 00:11:37.458 "data_offset": 0, 00:11:37.458 "data_size": 0 00:11:37.458 }, 00:11:37.458 { 00:11:37.458 "name": null, 00:11:37.458 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:37.458 "is_configured": false, 00:11:37.458 "data_offset": 0, 00:11:37.458 "data_size": 63488 
00:11:37.458 }, 00:11:37.458 { 00:11:37.458 "name": "BaseBdev3", 00:11:37.458 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:37.458 "is_configured": true, 00:11:37.458 "data_offset": 2048, 00:11:37.458 "data_size": 63488 00:11:37.458 }, 00:11:37.458 { 00:11:37.458 "name": "BaseBdev4", 00:11:37.458 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:37.458 "is_configured": true, 00:11:37.458 "data_offset": 2048, 00:11:37.458 "data_size": 63488 00:11:37.458 } 00:11:37.458 ] 00:11:37.458 }' 00:11:37.458 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.458 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 [2024-11-06 12:42:26.576732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.025 BaseBdev1 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.025 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.025 [ 00:11:38.025 { 00:11:38.025 "name": "BaseBdev1", 00:11:38.025 "aliases": [ 00:11:38.025 "cb582973-d3db-4d23-a25c-0c3c2af9a797" 00:11:38.025 ], 00:11:38.025 "product_name": "Malloc disk", 00:11:38.025 "block_size": 512, 00:11:38.025 "num_blocks": 65536, 00:11:38.025 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:38.025 "assigned_rate_limits": { 00:11:38.025 "rw_ios_per_sec": 0, 00:11:38.025 "rw_mbytes_per_sec": 0, 
00:11:38.025 "r_mbytes_per_sec": 0, 00:11:38.025 "w_mbytes_per_sec": 0 00:11:38.025 }, 00:11:38.025 "claimed": true, 00:11:38.025 "claim_type": "exclusive_write", 00:11:38.025 "zoned": false, 00:11:38.025 "supported_io_types": { 00:11:38.025 "read": true, 00:11:38.025 "write": true, 00:11:38.025 "unmap": true, 00:11:38.025 "flush": true, 00:11:38.025 "reset": true, 00:11:38.025 "nvme_admin": false, 00:11:38.025 "nvme_io": false, 00:11:38.025 "nvme_io_md": false, 00:11:38.025 "write_zeroes": true, 00:11:38.025 "zcopy": true, 00:11:38.025 "get_zone_info": false, 00:11:38.025 "zone_management": false, 00:11:38.025 "zone_append": false, 00:11:38.025 "compare": false, 00:11:38.025 "compare_and_write": false, 00:11:38.025 "abort": true, 00:11:38.025 "seek_hole": false, 00:11:38.025 "seek_data": false, 00:11:38.025 "copy": true, 00:11:38.025 "nvme_iov_md": false 00:11:38.025 }, 00:11:38.025 "memory_domains": [ 00:11:38.025 { 00:11:38.025 "dma_device_id": "system", 00:11:38.025 "dma_device_type": 1 00:11:38.025 }, 00:11:38.025 { 00:11:38.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.026 "dma_device_type": 2 00:11:38.026 } 00:11:38.026 ], 00:11:38.026 "driver_specific": {} 00:11:38.026 } 00:11:38.026 ] 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.026 12:42:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.026 "name": "Existed_Raid", 00:11:38.026 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:38.026 "strip_size_kb": 64, 00:11:38.026 "state": "configuring", 00:11:38.026 "raid_level": "concat", 00:11:38.026 "superblock": true, 00:11:38.026 "num_base_bdevs": 4, 00:11:38.026 "num_base_bdevs_discovered": 3, 00:11:38.026 "num_base_bdevs_operational": 4, 00:11:38.026 "base_bdevs_list": [ 00:11:38.026 { 00:11:38.026 "name": "BaseBdev1", 00:11:38.026 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:38.026 "is_configured": true, 00:11:38.026 "data_offset": 2048, 00:11:38.026 "data_size": 63488 00:11:38.026 }, 00:11:38.026 { 
00:11:38.026 "name": null, 00:11:38.026 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:38.026 "is_configured": false, 00:11:38.026 "data_offset": 0, 00:11:38.026 "data_size": 63488 00:11:38.026 }, 00:11:38.026 { 00:11:38.026 "name": "BaseBdev3", 00:11:38.026 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:38.026 "is_configured": true, 00:11:38.026 "data_offset": 2048, 00:11:38.026 "data_size": 63488 00:11:38.026 }, 00:11:38.026 { 00:11:38.026 "name": "BaseBdev4", 00:11:38.026 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:38.026 "is_configured": true, 00:11:38.026 "data_offset": 2048, 00:11:38.026 "data_size": 63488 00:11:38.026 } 00:11:38.026 ] 00:11:38.026 }' 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.026 12:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.592 [2024-11-06 12:42:27.180998] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.592 12:42:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.592 "name": "Existed_Raid", 00:11:38.592 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:38.592 "strip_size_kb": 64, 00:11:38.592 "state": "configuring", 00:11:38.592 "raid_level": "concat", 00:11:38.592 "superblock": true, 00:11:38.592 "num_base_bdevs": 4, 00:11:38.592 "num_base_bdevs_discovered": 2, 00:11:38.592 "num_base_bdevs_operational": 4, 00:11:38.592 "base_bdevs_list": [ 00:11:38.592 { 00:11:38.592 "name": "BaseBdev1", 00:11:38.592 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:38.592 "is_configured": true, 00:11:38.592 "data_offset": 2048, 00:11:38.592 "data_size": 63488 00:11:38.592 }, 00:11:38.592 { 00:11:38.592 "name": null, 00:11:38.592 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:38.592 "is_configured": false, 00:11:38.592 "data_offset": 0, 00:11:38.592 "data_size": 63488 00:11:38.592 }, 00:11:38.592 { 00:11:38.592 "name": null, 00:11:38.592 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:38.592 "is_configured": false, 00:11:38.592 "data_offset": 0, 00:11:38.592 "data_size": 63488 00:11:38.592 }, 00:11:38.592 { 00:11:38.592 "name": "BaseBdev4", 00:11:38.592 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:38.592 "is_configured": true, 00:11:38.592 "data_offset": 2048, 00:11:38.592 "data_size": 63488 00:11:38.592 } 00:11:38.592 ] 00:11:38.592 }' 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.592 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.158 
12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.158 [2024-11-06 12:42:27.749102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.158 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.158 "name": "Existed_Raid", 00:11:39.158 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:39.158 "strip_size_kb": 64, 00:11:39.158 "state": "configuring", 00:11:39.158 "raid_level": "concat", 00:11:39.158 "superblock": true, 00:11:39.158 "num_base_bdevs": 4, 00:11:39.158 "num_base_bdevs_discovered": 3, 00:11:39.158 "num_base_bdevs_operational": 4, 00:11:39.158 "base_bdevs_list": [ 00:11:39.158 { 00:11:39.158 "name": "BaseBdev1", 00:11:39.158 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:39.158 "is_configured": true, 00:11:39.158 "data_offset": 2048, 00:11:39.158 "data_size": 63488 00:11:39.158 }, 00:11:39.158 { 00:11:39.158 "name": null, 00:11:39.158 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:39.158 "is_configured": false, 00:11:39.158 "data_offset": 0, 00:11:39.158 "data_size": 63488 00:11:39.158 }, 00:11:39.158 { 00:11:39.158 "name": "BaseBdev3", 00:11:39.158 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:39.158 "is_configured": true, 00:11:39.158 "data_offset": 2048, 00:11:39.158 "data_size": 63488 00:11:39.158 }, 00:11:39.158 { 00:11:39.158 "name": "BaseBdev4", 00:11:39.158 "uuid": 
"d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:39.158 "is_configured": true, 00:11:39.158 "data_offset": 2048, 00:11:39.158 "data_size": 63488 00:11:39.158 } 00:11:39.159 ] 00:11:39.159 }' 00:11:39.159 12:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.159 12:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.726 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.726 [2024-11-06 12:42:28.337329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.985 "name": "Existed_Raid", 00:11:39.985 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:39.985 "strip_size_kb": 64, 00:11:39.985 "state": "configuring", 00:11:39.985 "raid_level": "concat", 00:11:39.985 "superblock": true, 00:11:39.985 "num_base_bdevs": 4, 00:11:39.985 "num_base_bdevs_discovered": 2, 00:11:39.985 "num_base_bdevs_operational": 4, 00:11:39.985 "base_bdevs_list": [ 00:11:39.985 { 00:11:39.985 "name": null, 00:11:39.985 
"uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:39.985 "is_configured": false, 00:11:39.985 "data_offset": 0, 00:11:39.985 "data_size": 63488 00:11:39.985 }, 00:11:39.985 { 00:11:39.985 "name": null, 00:11:39.985 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:39.985 "is_configured": false, 00:11:39.985 "data_offset": 0, 00:11:39.985 "data_size": 63488 00:11:39.985 }, 00:11:39.985 { 00:11:39.985 "name": "BaseBdev3", 00:11:39.985 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:39.985 "is_configured": true, 00:11:39.985 "data_offset": 2048, 00:11:39.985 "data_size": 63488 00:11:39.985 }, 00:11:39.985 { 00:11:39.985 "name": "BaseBdev4", 00:11:39.985 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:39.985 "is_configured": true, 00:11:39.985 "data_offset": 2048, 00:11:39.985 "data_size": 63488 00:11:39.985 } 00:11:39.985 ] 00:11:39.985 }' 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.985 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.552 [2024-11-06 12:42:28.979262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.552 12:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.552 12:42:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.552 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.552 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.552 "name": "Existed_Raid", 00:11:40.552 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:40.552 "strip_size_kb": 64, 00:11:40.552 "state": "configuring", 00:11:40.552 "raid_level": "concat", 00:11:40.552 "superblock": true, 00:11:40.552 "num_base_bdevs": 4, 00:11:40.552 "num_base_bdevs_discovered": 3, 00:11:40.552 "num_base_bdevs_operational": 4, 00:11:40.552 "base_bdevs_list": [ 00:11:40.552 { 00:11:40.552 "name": null, 00:11:40.552 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:40.552 "is_configured": false, 00:11:40.552 "data_offset": 0, 00:11:40.552 "data_size": 63488 00:11:40.552 }, 00:11:40.552 { 00:11:40.552 "name": "BaseBdev2", 00:11:40.552 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:40.552 "is_configured": true, 00:11:40.552 "data_offset": 2048, 00:11:40.552 "data_size": 63488 00:11:40.552 }, 00:11:40.552 { 00:11:40.552 "name": "BaseBdev3", 00:11:40.552 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:40.552 "is_configured": true, 00:11:40.552 "data_offset": 2048, 00:11:40.552 "data_size": 63488 00:11:40.552 }, 00:11:40.552 { 00:11:40.552 "name": "BaseBdev4", 00:11:40.552 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:40.552 "is_configured": true, 00:11:40.552 "data_offset": 2048, 00:11:40.552 "data_size": 63488 00:11:40.552 } 00:11:40.552 ] 00:11:40.552 }' 00:11:40.552 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.552 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.130 12:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cb582973-d3db-4d23-a25c-0c3c2af9a797 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 [2024-11-06 12:42:29.640205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.130 [2024-11-06 12:42:29.640516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.130 [2024-11-06 12:42:29.640535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.130 [2024-11-06 12:42:29.640839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:41.130 [2024-11-06 12:42:29.641007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.130 [2024-11-06 12:42:29.641026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:41.130 NewBaseBdev 00:11:41.130 [2024-11-06 12:42:29.641185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 12:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 [ 00:11:41.130 { 00:11:41.130 "name": "NewBaseBdev", 00:11:41.130 "aliases": [ 00:11:41.130 "cb582973-d3db-4d23-a25c-0c3c2af9a797" 00:11:41.130 ], 00:11:41.130 "product_name": "Malloc disk", 00:11:41.130 "block_size": 512, 00:11:41.130 "num_blocks": 65536, 00:11:41.130 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:41.130 "assigned_rate_limits": { 00:11:41.130 "rw_ios_per_sec": 0, 00:11:41.130 "rw_mbytes_per_sec": 0, 00:11:41.130 "r_mbytes_per_sec": 0, 00:11:41.130 "w_mbytes_per_sec": 0 00:11:41.130 }, 00:11:41.130 "claimed": true, 00:11:41.130 "claim_type": "exclusive_write", 00:11:41.130 "zoned": false, 00:11:41.130 "supported_io_types": { 00:11:41.130 "read": true, 00:11:41.130 "write": true, 00:11:41.130 "unmap": true, 00:11:41.130 "flush": true, 00:11:41.130 "reset": true, 00:11:41.130 "nvme_admin": false, 00:11:41.130 "nvme_io": false, 00:11:41.130 "nvme_io_md": false, 00:11:41.130 "write_zeroes": true, 00:11:41.130 "zcopy": true, 00:11:41.130 "get_zone_info": false, 00:11:41.130 "zone_management": false, 00:11:41.130 "zone_append": false, 00:11:41.130 "compare": false, 00:11:41.130 "compare_and_write": false, 00:11:41.130 "abort": true, 00:11:41.130 "seek_hole": false, 00:11:41.130 "seek_data": false, 00:11:41.130 "copy": true, 00:11:41.130 "nvme_iov_md": false 00:11:41.130 }, 00:11:41.130 "memory_domains": [ 00:11:41.130 { 00:11:41.130 "dma_device_id": "system", 00:11:41.130 "dma_device_type": 1 00:11:41.130 }, 00:11:41.130 { 00:11:41.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.130 "dma_device_type": 2 00:11:41.130 } 00:11:41.130 ], 00:11:41.130 "driver_specific": {} 00:11:41.130 } 00:11:41.130 ] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:41.130 12:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.130 "name": "Existed_Raid", 00:11:41.130 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:41.130 "strip_size_kb": 64, 00:11:41.130 
"state": "online", 00:11:41.130 "raid_level": "concat", 00:11:41.130 "superblock": true, 00:11:41.130 "num_base_bdevs": 4, 00:11:41.130 "num_base_bdevs_discovered": 4, 00:11:41.130 "num_base_bdevs_operational": 4, 00:11:41.130 "base_bdevs_list": [ 00:11:41.130 { 00:11:41.130 "name": "NewBaseBdev", 00:11:41.130 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:41.130 "is_configured": true, 00:11:41.130 "data_offset": 2048, 00:11:41.130 "data_size": 63488 00:11:41.130 }, 00:11:41.130 { 00:11:41.130 "name": "BaseBdev2", 00:11:41.130 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:41.130 "is_configured": true, 00:11:41.130 "data_offset": 2048, 00:11:41.130 "data_size": 63488 00:11:41.130 }, 00:11:41.130 { 00:11:41.130 "name": "BaseBdev3", 00:11:41.130 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:41.130 "is_configured": true, 00:11:41.130 "data_offset": 2048, 00:11:41.130 "data_size": 63488 00:11:41.130 }, 00:11:41.130 { 00:11:41.130 "name": "BaseBdev4", 00:11:41.130 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:41.130 "is_configured": true, 00:11:41.130 "data_offset": 2048, 00:11:41.130 "data_size": 63488 00:11:41.130 } 00:11:41.130 ] 00:11:41.130 }' 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.130 12:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.695 
12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.695 [2024-11-06 12:42:30.196942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.695 "name": "Existed_Raid", 00:11:41.695 "aliases": [ 00:11:41.695 "7a3d0483-7e1c-433c-975d-fa60c09c4317" 00:11:41.695 ], 00:11:41.695 "product_name": "Raid Volume", 00:11:41.695 "block_size": 512, 00:11:41.695 "num_blocks": 253952, 00:11:41.695 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:41.695 "assigned_rate_limits": { 00:11:41.695 "rw_ios_per_sec": 0, 00:11:41.695 "rw_mbytes_per_sec": 0, 00:11:41.695 "r_mbytes_per_sec": 0, 00:11:41.695 "w_mbytes_per_sec": 0 00:11:41.695 }, 00:11:41.695 "claimed": false, 00:11:41.695 "zoned": false, 00:11:41.695 "supported_io_types": { 00:11:41.695 "read": true, 00:11:41.695 "write": true, 00:11:41.695 "unmap": true, 00:11:41.695 "flush": true, 00:11:41.695 "reset": true, 00:11:41.695 "nvme_admin": false, 00:11:41.695 "nvme_io": false, 00:11:41.695 "nvme_io_md": false, 00:11:41.695 "write_zeroes": true, 00:11:41.695 "zcopy": false, 00:11:41.695 "get_zone_info": false, 00:11:41.695 "zone_management": false, 00:11:41.695 "zone_append": false, 00:11:41.695 "compare": false, 00:11:41.695 "compare_and_write": false, 00:11:41.695 "abort": 
false, 00:11:41.695 "seek_hole": false, 00:11:41.695 "seek_data": false, 00:11:41.695 "copy": false, 00:11:41.695 "nvme_iov_md": false 00:11:41.695 }, 00:11:41.695 "memory_domains": [ 00:11:41.695 { 00:11:41.695 "dma_device_id": "system", 00:11:41.695 "dma_device_type": 1 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.695 "dma_device_type": 2 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "system", 00:11:41.695 "dma_device_type": 1 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.695 "dma_device_type": 2 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "system", 00:11:41.695 "dma_device_type": 1 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.695 "dma_device_type": 2 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "system", 00:11:41.695 "dma_device_type": 1 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.695 "dma_device_type": 2 00:11:41.695 } 00:11:41.695 ], 00:11:41.695 "driver_specific": { 00:11:41.695 "raid": { 00:11:41.695 "uuid": "7a3d0483-7e1c-433c-975d-fa60c09c4317", 00:11:41.695 "strip_size_kb": 64, 00:11:41.695 "state": "online", 00:11:41.695 "raid_level": "concat", 00:11:41.695 "superblock": true, 00:11:41.695 "num_base_bdevs": 4, 00:11:41.695 "num_base_bdevs_discovered": 4, 00:11:41.695 "num_base_bdevs_operational": 4, 00:11:41.695 "base_bdevs_list": [ 00:11:41.695 { 00:11:41.695 "name": "NewBaseBdev", 00:11:41.695 "uuid": "cb582973-d3db-4d23-a25c-0c3c2af9a797", 00:11:41.695 "is_configured": true, 00:11:41.695 "data_offset": 2048, 00:11:41.695 "data_size": 63488 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "name": "BaseBdev2", 00:11:41.695 "uuid": "9b9856b1-e04e-4cbb-878b-8bc1435b2fa3", 00:11:41.695 "is_configured": true, 00:11:41.695 "data_offset": 2048, 00:11:41.695 "data_size": 63488 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 
"name": "BaseBdev3", 00:11:41.695 "uuid": "187676b1-9fb4-467b-960c-140854795130", 00:11:41.695 "is_configured": true, 00:11:41.695 "data_offset": 2048, 00:11:41.695 "data_size": 63488 00:11:41.695 }, 00:11:41.695 { 00:11:41.695 "name": "BaseBdev4", 00:11:41.695 "uuid": "d5981548-c9a5-4eb2-8309-0aac99aceb50", 00:11:41.695 "is_configured": true, 00:11:41.695 "data_offset": 2048, 00:11:41.695 "data_size": 63488 00:11:41.695 } 00:11:41.695 ] 00:11:41.695 } 00:11:41.695 } 00:11:41.695 }' 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.695 BaseBdev2 00:11:41.695 BaseBdev3 00:11:41.695 BaseBdev4' 00:11:41.695 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.953 12:42:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.953 [2024-11-06 12:42:30.580557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.953 [2024-11-06 12:42:30.580637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.953 [2024-11-06 12:42:30.580762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.953 [2024-11-06 12:42:30.580859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.953 [2024-11-06 12:42:30.580876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72100 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72100 ']' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72100 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:41.953 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72100 00:11:42.212 killing process with pid 72100 00:11:42.212 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.212 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.212 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72100' 00:11:42.212 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72100 00:11:42.212 [2024-11-06 12:42:30.617722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.212 12:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72100 00:11:42.470 [2024-11-06 12:42:30.971266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.406 12:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.406 00:11:43.406 real 0m12.830s 00:11:43.406 user 0m21.177s 00:11:43.406 sys 0m1.901s 00:11:43.406 12:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.406 12:42:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.406 ************************************ 00:11:43.406 END TEST raid_state_function_test_sb 00:11:43.406 ************************************ 00:11:43.406 12:42:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:43.406 12:42:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:43.406 12:42:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.406 12:42:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.406 ************************************ 00:11:43.406 START TEST raid_superblock_test 00:11:43.406 ************************************ 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72786 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72786 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72786 ']' 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.406 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.665 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.665 12:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.665 [2024-11-06 12:42:32.169736] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:43.665 [2024-11-06 12:42:32.170125] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72786 ] 00:11:43.924 [2024-11-06 12:42:32.355695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.924 [2024-11-06 12:42:32.486204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.182 [2024-11-06 12:42:32.698920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.182 [2024-11-06 12:42:32.698981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:44.749 
12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.749 malloc1 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.749 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.749 [2024-11-06 12:42:33.211254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.749 [2024-11-06 12:42:33.211368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.749 [2024-11-06 12:42:33.211406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:44.749 [2024-11-06 12:42:33.211422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.749 [2024-11-06 12:42:33.214109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.750 [2024-11-06 12:42:33.214473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.750 pt1 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 malloc2 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 [2024-11-06 12:42:33.267868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.750 [2024-11-06 12:42:33.268121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.750 [2024-11-06 12:42:33.268169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:44.750 [2024-11-06 12:42:33.268186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.750 [2024-11-06 12:42:33.270913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.750 [2024-11-06 12:42:33.270958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.750 
pt2 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 malloc3 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 [2024-11-06 12:42:33.329412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.750 [2024-11-06 12:42:33.329496] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.750 [2024-11-06 12:42:33.329552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:44.750 [2024-11-06 12:42:33.329569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.750 [2024-11-06 12:42:33.332314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.750 [2024-11-06 12:42:33.332595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.750 pt3 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 malloc4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 [2024-11-06 12:42:33.381293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:44.750 [2024-11-06 12:42:33.381372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.750 [2024-11-06 12:42:33.381403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:44.750 [2024-11-06 12:42:33.381418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.750 [2024-11-06 12:42:33.384120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.750 [2024-11-06 12:42:33.384404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:44.750 pt4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.750 [2024-11-06 12:42:33.393338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:44.750 [2024-11-06 
12:42:33.395746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.750 [2024-11-06 12:42:33.395983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:44.750 [2024-11-06 12:42:33.396096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:44.750 [2024-11-06 12:42:33.396372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:44.750 [2024-11-06 12:42:33.396392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.750 [2024-11-06 12:42:33.396715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.750 [2024-11-06 12:42:33.396917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:44.750 [2024-11-06 12:42:33.396938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:44.750 [2024-11-06 12:42:33.397112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.750 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.009 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.009 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.009 "name": "raid_bdev1", 00:11:45.009 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:45.009 "strip_size_kb": 64, 00:11:45.009 "state": "online", 00:11:45.009 "raid_level": "concat", 00:11:45.009 "superblock": true, 00:11:45.009 "num_base_bdevs": 4, 00:11:45.009 "num_base_bdevs_discovered": 4, 00:11:45.009 "num_base_bdevs_operational": 4, 00:11:45.009 "base_bdevs_list": [ 00:11:45.009 { 00:11:45.009 "name": "pt1", 00:11:45.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.009 "is_configured": true, 00:11:45.009 "data_offset": 2048, 00:11:45.009 "data_size": 63488 00:11:45.009 }, 00:11:45.009 { 00:11:45.009 "name": "pt2", 00:11:45.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.009 "is_configured": true, 00:11:45.009 "data_offset": 2048, 00:11:45.009 "data_size": 63488 00:11:45.009 }, 00:11:45.009 { 00:11:45.009 "name": "pt3", 00:11:45.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.009 "is_configured": true, 00:11:45.009 "data_offset": 2048, 00:11:45.009 
"data_size": 63488 00:11:45.009 }, 00:11:45.009 { 00:11:45.009 "name": "pt4", 00:11:45.009 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.009 "is_configured": true, 00:11:45.009 "data_offset": 2048, 00:11:45.009 "data_size": 63488 00:11:45.009 } 00:11:45.009 ] 00:11:45.009 }' 00:11:45.009 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.009 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.299 [2024-11-06 12:42:33.921907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.299 12:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.557 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.557 "name": "raid_bdev1", 00:11:45.557 "aliases": [ 00:11:45.557 "73acef3d-6e6b-468f-b670-4b784a55adc6" 
00:11:45.557 ], 00:11:45.557 "product_name": "Raid Volume", 00:11:45.557 "block_size": 512, 00:11:45.557 "num_blocks": 253952, 00:11:45.557 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:45.557 "assigned_rate_limits": { 00:11:45.557 "rw_ios_per_sec": 0, 00:11:45.557 "rw_mbytes_per_sec": 0, 00:11:45.557 "r_mbytes_per_sec": 0, 00:11:45.557 "w_mbytes_per_sec": 0 00:11:45.557 }, 00:11:45.557 "claimed": false, 00:11:45.557 "zoned": false, 00:11:45.557 "supported_io_types": { 00:11:45.557 "read": true, 00:11:45.557 "write": true, 00:11:45.557 "unmap": true, 00:11:45.557 "flush": true, 00:11:45.557 "reset": true, 00:11:45.557 "nvme_admin": false, 00:11:45.557 "nvme_io": false, 00:11:45.557 "nvme_io_md": false, 00:11:45.557 "write_zeroes": true, 00:11:45.557 "zcopy": false, 00:11:45.557 "get_zone_info": false, 00:11:45.557 "zone_management": false, 00:11:45.557 "zone_append": false, 00:11:45.557 "compare": false, 00:11:45.557 "compare_and_write": false, 00:11:45.557 "abort": false, 00:11:45.557 "seek_hole": false, 00:11:45.557 "seek_data": false, 00:11:45.557 "copy": false, 00:11:45.557 "nvme_iov_md": false 00:11:45.557 }, 00:11:45.557 "memory_domains": [ 00:11:45.557 { 00:11:45.557 "dma_device_id": "system", 00:11:45.557 "dma_device_type": 1 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.557 "dma_device_type": 2 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "system", 00:11:45.557 "dma_device_type": 1 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.557 "dma_device_type": 2 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "system", 00:11:45.557 "dma_device_type": 1 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.557 "dma_device_type": 2 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "system", 00:11:45.557 "dma_device_type": 1 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:45.557 "dma_device_type": 2 00:11:45.557 } 00:11:45.557 ], 00:11:45.557 "driver_specific": { 00:11:45.557 "raid": { 00:11:45.557 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:45.557 "strip_size_kb": 64, 00:11:45.557 "state": "online", 00:11:45.557 "raid_level": "concat", 00:11:45.558 "superblock": true, 00:11:45.558 "num_base_bdevs": 4, 00:11:45.558 "num_base_bdevs_discovered": 4, 00:11:45.558 "num_base_bdevs_operational": 4, 00:11:45.558 "base_bdevs_list": [ 00:11:45.558 { 00:11:45.558 "name": "pt1", 00:11:45.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.558 "is_configured": true, 00:11:45.558 "data_offset": 2048, 00:11:45.558 "data_size": 63488 00:11:45.558 }, 00:11:45.558 { 00:11:45.558 "name": "pt2", 00:11:45.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.558 "is_configured": true, 00:11:45.558 "data_offset": 2048, 00:11:45.558 "data_size": 63488 00:11:45.558 }, 00:11:45.558 { 00:11:45.558 "name": "pt3", 00:11:45.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.558 "is_configured": true, 00:11:45.558 "data_offset": 2048, 00:11:45.558 "data_size": 63488 00:11:45.558 }, 00:11:45.558 { 00:11:45.558 "name": "pt4", 00:11:45.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.558 "is_configured": true, 00:11:45.558 "data_offset": 2048, 00:11:45.558 "data_size": 63488 00:11:45.558 } 00:11:45.558 ] 00:11:45.558 } 00:11:45.558 } 00:11:45.558 }' 00:11:45.558 12:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.558 pt2 00:11:45.558 pt3 00:11:45.558 pt4' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.558 12:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.558 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 [2024-11-06 12:42:34.297941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73acef3d-6e6b-468f-b670-4b784a55adc6 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 73acef3d-6e6b-468f-b670-4b784a55adc6 ']' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 [2024-11-06 12:42:34.345598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.817 [2024-11-06 12:42:34.345639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.817 [2024-11-06 12:42:34.345745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.817 [2024-11-06 12:42:34.345839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.817 [2024-11-06 12:42:34.345864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.817 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.076 12:42:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.076 [2024-11-06 12:42:34.489652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.076 [2024-11-06 12:42:34.492413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.076 [2024-11-06 12:42:34.492524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:46.076 [2024-11-06 12:42:34.492685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:46.076 [2024-11-06 12:42:34.492805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:46.076 [2024-11-06 12:42:34.493071] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.076 [2024-11-06 12:42:34.493255] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:46.076 [2024-11-06 12:42:34.493461] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:46.076 [2024-11-06 12:42:34.493689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.076 [2024-11-06 12:42:34.493896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:46.076 request: 00:11:46.076 { 00:11:46.076 "name": "raid_bdev1", 00:11:46.076 "raid_level": "concat", 00:11:46.076 "base_bdevs": [ 00:11:46.076 "malloc1", 00:11:46.076 "malloc2", 00:11:46.076 "malloc3", 00:11:46.076 "malloc4" 00:11:46.076 ], 00:11:46.076 "strip_size_kb": 64, 00:11:46.076 "superblock": false, 00:11:46.076 "method": "bdev_raid_create", 00:11:46.076 "req_id": 1 00:11:46.076 } 00:11:46.076 Got JSON-RPC error response 00:11:46.076 response: 00:11:46.076 { 00:11:46.076 "code": -17, 00:11:46.076 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.076 } 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.076 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.076 [2024-11-06 12:42:34.554354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.076 [2024-11-06 12:42:34.554456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.076 [2024-11-06 12:42:34.554486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:46.077 [2024-11-06 12:42:34.554504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.077 [2024-11-06 12:42:34.557438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.077 [2024-11-06 12:42:34.557492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.077 [2024-11-06 12:42:34.557613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:46.077 [2024-11-06 12:42:34.557697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.077 pt1 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.077 "name": "raid_bdev1", 00:11:46.077 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:46.077 "strip_size_kb": 64, 00:11:46.077 "state": "configuring", 00:11:46.077 "raid_level": "concat", 00:11:46.077 "superblock": true, 00:11:46.077 "num_base_bdevs": 4, 00:11:46.077 "num_base_bdevs_discovered": 1, 00:11:46.077 "num_base_bdevs_operational": 4, 00:11:46.077 "base_bdevs_list": [ 00:11:46.077 { 00:11:46.077 "name": "pt1", 00:11:46.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.077 "is_configured": true, 00:11:46.077 "data_offset": 2048, 00:11:46.077 "data_size": 63488 00:11:46.077 }, 00:11:46.077 { 00:11:46.077 "name": null, 00:11:46.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.077 "is_configured": false, 00:11:46.077 "data_offset": 2048, 00:11:46.077 "data_size": 63488 00:11:46.077 }, 00:11:46.077 { 00:11:46.077 "name": null, 00:11:46.077 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.077 "is_configured": false, 00:11:46.077 "data_offset": 2048, 00:11:46.077 "data_size": 63488 00:11:46.077 }, 00:11:46.077 { 00:11:46.077 "name": null, 00:11:46.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.077 "is_configured": false, 00:11:46.077 "data_offset": 2048, 00:11:46.077 "data_size": 63488 00:11:46.077 } 00:11:46.077 ] 00:11:46.077 }' 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.077 12:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.644 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:46.644 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.644 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.644 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.644 [2024-11-06 12:42:35.050483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.644 [2024-11-06 12:42:35.050920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.644 [2024-11-06 12:42:35.050995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:46.644 [2024-11-06 12:42:35.051254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.644 [2024-11-06 12:42:35.051884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.644 [2024-11-06 12:42:35.052078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.644 [2024-11-06 12:42:35.052328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.644 [2024-11-06 12:42:35.052377] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.644 pt2 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.645 [2024-11-06 12:42:35.058476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.645 12:42:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.645 "name": "raid_bdev1", 00:11:46.645 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:46.645 "strip_size_kb": 64, 00:11:46.645 "state": "configuring", 00:11:46.645 "raid_level": "concat", 00:11:46.645 "superblock": true, 00:11:46.645 "num_base_bdevs": 4, 00:11:46.645 "num_base_bdevs_discovered": 1, 00:11:46.645 "num_base_bdevs_operational": 4, 00:11:46.645 "base_bdevs_list": [ 00:11:46.645 { 00:11:46.645 "name": "pt1", 00:11:46.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.645 "is_configured": true, 00:11:46.645 "data_offset": 2048, 00:11:46.645 "data_size": 63488 00:11:46.645 }, 00:11:46.645 { 00:11:46.645 "name": null, 00:11:46.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.645 "is_configured": false, 00:11:46.645 "data_offset": 0, 00:11:46.645 "data_size": 63488 00:11:46.645 }, 00:11:46.645 { 00:11:46.645 "name": null, 00:11:46.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.645 "is_configured": false, 00:11:46.645 "data_offset": 2048, 00:11:46.645 "data_size": 63488 00:11:46.645 }, 00:11:46.645 { 00:11:46.645 "name": null, 00:11:46.645 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.645 "is_configured": false, 00:11:46.645 "data_offset": 2048, 00:11:46.645 "data_size": 63488 00:11:46.645 } 00:11:46.645 ] 00:11:46.645 }' 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.645 12:42:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.213 [2024-11-06 12:42:35.566625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.213 [2024-11-06 12:42:35.566735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.213 [2024-11-06 12:42:35.566769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:47.213 [2024-11-06 12:42:35.566784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.213 [2024-11-06 12:42:35.567434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.213 [2024-11-06 12:42:35.567461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.213 [2024-11-06 12:42:35.567576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.213 [2024-11-06 12:42:35.567608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.213 pt2 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.213 [2024-11-06 12:42:35.574571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.213 [2024-11-06 12:42:35.574656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.213 [2024-11-06 12:42:35.574690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:47.213 [2024-11-06 12:42:35.574722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.213 [2024-11-06 12:42:35.575161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.213 [2024-11-06 12:42:35.575220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.213 [2024-11-06 12:42:35.575301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.213 [2024-11-06 12:42:35.575357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.213 pt3 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.213 [2024-11-06 12:42:35.582534] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.213 [2024-11-06 12:42:35.582795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.213 [2024-11-06 12:42:35.582835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:47.213 [2024-11-06 12:42:35.582850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.213 [2024-11-06 12:42:35.583368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.213 [2024-11-06 12:42:35.583404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.213 [2024-11-06 12:42:35.583487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.213 [2024-11-06 12:42:35.583516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.213 [2024-11-06 12:42:35.583681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.213 [2024-11-06 12:42:35.583697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.213 [2024-11-06 12:42:35.584003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:47.213 [2024-11-06 12:42:35.584200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.213 [2024-11-06 12:42:35.584256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.213 [2024-11-06 12:42:35.584411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.213 pt4 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.213 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.214 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.214 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.214 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.214 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.214 "name": "raid_bdev1", 00:11:47.214 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:47.214 "strip_size_kb": 64, 00:11:47.214 "state": "online", 00:11:47.214 "raid_level": "concat", 00:11:47.214 
"superblock": true, 00:11:47.214 "num_base_bdevs": 4, 00:11:47.214 "num_base_bdevs_discovered": 4, 00:11:47.214 "num_base_bdevs_operational": 4, 00:11:47.214 "base_bdevs_list": [ 00:11:47.214 { 00:11:47.214 "name": "pt1", 00:11:47.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.214 "is_configured": true, 00:11:47.214 "data_offset": 2048, 00:11:47.214 "data_size": 63488 00:11:47.214 }, 00:11:47.214 { 00:11:47.214 "name": "pt2", 00:11:47.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.214 "is_configured": true, 00:11:47.214 "data_offset": 2048, 00:11:47.214 "data_size": 63488 00:11:47.214 }, 00:11:47.214 { 00:11:47.214 "name": "pt3", 00:11:47.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.214 "is_configured": true, 00:11:47.214 "data_offset": 2048, 00:11:47.214 "data_size": 63488 00:11:47.214 }, 00:11:47.214 { 00:11:47.214 "name": "pt4", 00:11:47.214 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.214 "is_configured": true, 00:11:47.214 "data_offset": 2048, 00:11:47.214 "data_size": 63488 00:11:47.214 } 00:11:47.214 ] 00:11:47.214 }' 00:11:47.214 12:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.214 12:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.472 12:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.472 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.472 [2024-11-06 12:42:36.123146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.731 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.731 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.731 "name": "raid_bdev1", 00:11:47.731 "aliases": [ 00:11:47.731 "73acef3d-6e6b-468f-b670-4b784a55adc6" 00:11:47.731 ], 00:11:47.731 "product_name": "Raid Volume", 00:11:47.731 "block_size": 512, 00:11:47.731 "num_blocks": 253952, 00:11:47.731 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:47.731 "assigned_rate_limits": { 00:11:47.732 "rw_ios_per_sec": 0, 00:11:47.732 "rw_mbytes_per_sec": 0, 00:11:47.732 "r_mbytes_per_sec": 0, 00:11:47.732 "w_mbytes_per_sec": 0 00:11:47.732 }, 00:11:47.732 "claimed": false, 00:11:47.732 "zoned": false, 00:11:47.732 "supported_io_types": { 00:11:47.732 "read": true, 00:11:47.732 "write": true, 00:11:47.732 "unmap": true, 00:11:47.732 "flush": true, 00:11:47.732 "reset": true, 00:11:47.732 "nvme_admin": false, 00:11:47.732 "nvme_io": false, 00:11:47.732 "nvme_io_md": false, 00:11:47.732 "write_zeroes": true, 00:11:47.732 "zcopy": false, 00:11:47.732 "get_zone_info": false, 00:11:47.732 "zone_management": false, 00:11:47.732 "zone_append": false, 00:11:47.732 "compare": false, 00:11:47.732 "compare_and_write": false, 00:11:47.732 "abort": false, 00:11:47.732 "seek_hole": false, 00:11:47.732 "seek_data": false, 00:11:47.732 "copy": false, 00:11:47.732 "nvme_iov_md": false 00:11:47.732 }, 00:11:47.732 
"memory_domains": [ 00:11:47.732 { 00:11:47.732 "dma_device_id": "system", 00:11:47.732 "dma_device_type": 1 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.732 "dma_device_type": 2 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "system", 00:11:47.732 "dma_device_type": 1 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.732 "dma_device_type": 2 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "system", 00:11:47.732 "dma_device_type": 1 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.732 "dma_device_type": 2 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "system", 00:11:47.732 "dma_device_type": 1 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.732 "dma_device_type": 2 00:11:47.732 } 00:11:47.732 ], 00:11:47.732 "driver_specific": { 00:11:47.732 "raid": { 00:11:47.732 "uuid": "73acef3d-6e6b-468f-b670-4b784a55adc6", 00:11:47.732 "strip_size_kb": 64, 00:11:47.732 "state": "online", 00:11:47.732 "raid_level": "concat", 00:11:47.732 "superblock": true, 00:11:47.732 "num_base_bdevs": 4, 00:11:47.732 "num_base_bdevs_discovered": 4, 00:11:47.732 "num_base_bdevs_operational": 4, 00:11:47.732 "base_bdevs_list": [ 00:11:47.732 { 00:11:47.732 "name": "pt1", 00:11:47.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.732 "is_configured": true, 00:11:47.732 "data_offset": 2048, 00:11:47.732 "data_size": 63488 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "name": "pt2", 00:11:47.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.732 "is_configured": true, 00:11:47.732 "data_offset": 2048, 00:11:47.732 "data_size": 63488 00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "name": "pt3", 00:11:47.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.732 "is_configured": true, 00:11:47.732 "data_offset": 2048, 00:11:47.732 "data_size": 63488 
00:11:47.732 }, 00:11:47.732 { 00:11:47.732 "name": "pt4", 00:11:47.732 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.732 "is_configured": true, 00:11:47.732 "data_offset": 2048, 00:11:47.732 "data_size": 63488 00:11:47.732 } 00:11:47.732 ] 00:11:47.732 } 00:11:47.732 } 00:11:47.732 }' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.732 pt2 00:11:47.732 pt3 00:11:47.732 pt4' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.732 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.991 [2024-11-06 12:42:36.499275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73acef3d-6e6b-468f-b670-4b784a55adc6 '!=' 73acef3d-6e6b-468f-b670-4b784a55adc6 ']' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72786 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72786 ']' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72786 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # uname 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72786 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.991 killing process with pid 72786 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72786' 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72786 00:11:47.991 [2024-11-06 12:42:36.579734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.991 12:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72786 00:11:47.991 [2024-11-06 12:42:36.579838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.991 [2024-11-06 12:42:36.579937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.991 [2024-11-06 12:42:36.579954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:48.558 [2024-11-06 12:42:36.931085] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.493 ************************************ 00:11:49.493 END TEST raid_superblock_test 00:11:49.493 ************************************ 00:11:49.493 12:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:49.493 00:11:49.493 real 0m5.908s 00:11:49.493 user 0m8.836s 00:11:49.493 sys 0m0.907s 00:11:49.493 12:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.493 12:42:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.493 12:42:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:49.493 12:42:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:49.493 12:42:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.493 12:42:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.493 ************************************ 00:11:49.493 START TEST raid_read_error_test 00:11:49.493 ************************************ 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EJsl8z2oai 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73052 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73052 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73052 ']' 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.493 12:42:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.493 [2024-11-06 12:42:38.138675] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:49.493 [2024-11-06 12:42:38.138853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73052 ] 00:11:49.751 [2024-11-06 12:42:38.314861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.010 [2024-11-06 12:42:38.438310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.010 [2024-11-06 12:42:38.641754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.010 [2024-11-06 12:42:38.641829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 BaseBdev1_malloc 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 true 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 [2024-11-06 12:42:39.167817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.576 [2024-11-06 12:42:39.168070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.576 [2024-11-06 12:42:39.168156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.576 [2024-11-06 12:42:39.168329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.576 [2024-11-06 12:42:39.171281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.576 [2024-11-06 12:42:39.171516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.576 BaseBdev1 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 BaseBdev2_malloc 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 true 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.576 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.576 [2024-11-06 12:42:39.227977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.576 [2024-11-06 12:42:39.228110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.576 [2024-11-06 12:42:39.228136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.576 [2024-11-06 12:42:39.228152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.576 [2024-11-06 12:42:39.230950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.576 [2024-11-06 12:42:39.231000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.834 BaseBdev2 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 BaseBdev3_malloc 00:11:50.834 12:42:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 true 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 [2024-11-06 12:42:39.302803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:50.834 [2024-11-06 12:42:39.303059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.834 [2024-11-06 12:42:39.303145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:50.834 [2024-11-06 12:42:39.303354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.834 [2024-11-06 12:42:39.306073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.834 [2024-11-06 12:42:39.306124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:50.834 BaseBdev3 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 BaseBdev4_malloc 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 true 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.834 [2024-11-06 12:42:39.358169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:50.834 [2024-11-06 12:42:39.358262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.834 [2024-11-06 12:42:39.358288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.834 [2024-11-06 12:42:39.358305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.834 [2024-11-06 12:42:39.361064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.834 [2024-11-06 12:42:39.361117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:50.834 BaseBdev4 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.834 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.835 [2024-11-06 12:42:39.366291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.835 [2024-11-06 12:42:39.368732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.835 [2024-11-06 12:42:39.368834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.835 [2024-11-06 12:42:39.368926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.835 [2024-11-06 12:42:39.369219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:50.835 [2024-11-06 12:42:39.369244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.835 [2024-11-06 12:42:39.369536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:50.835 [2024-11-06 12:42:39.369741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:50.835 [2024-11-06 12:42:39.369759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:50.835 [2024-11-06 12:42:39.369934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:50.835 12:42:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.835 "name": "raid_bdev1", 00:11:50.835 "uuid": "de4fd10b-d78f-4512-b3f8-58b4094c0876", 00:11:50.835 "strip_size_kb": 64, 00:11:50.835 "state": "online", 00:11:50.835 "raid_level": "concat", 00:11:50.835 "superblock": true, 00:11:50.835 "num_base_bdevs": 4, 00:11:50.835 "num_base_bdevs_discovered": 4, 00:11:50.835 "num_base_bdevs_operational": 4, 00:11:50.835 "base_bdevs_list": [ 
00:11:50.835 { 00:11:50.835 "name": "BaseBdev1", 00:11:50.835 "uuid": "f6c143b7-7f1b-5950-ac73-51a3da49f28e", 00:11:50.835 "is_configured": true, 00:11:50.835 "data_offset": 2048, 00:11:50.835 "data_size": 63488 00:11:50.835 }, 00:11:50.835 { 00:11:50.835 "name": "BaseBdev2", 00:11:50.835 "uuid": "05b6e963-8e74-5d55-92fa-83edd568d318", 00:11:50.835 "is_configured": true, 00:11:50.835 "data_offset": 2048, 00:11:50.835 "data_size": 63488 00:11:50.835 }, 00:11:50.835 { 00:11:50.835 "name": "BaseBdev3", 00:11:50.835 "uuid": "ae16fa74-3fdd-55ff-968b-619d783c45d5", 00:11:50.835 "is_configured": true, 00:11:50.835 "data_offset": 2048, 00:11:50.835 "data_size": 63488 00:11:50.835 }, 00:11:50.835 { 00:11:50.835 "name": "BaseBdev4", 00:11:50.835 "uuid": "848edde6-e68c-5444-8b89-08701253d986", 00:11:50.835 "is_configured": true, 00:11:50.835 "data_offset": 2048, 00:11:50.835 "data_size": 63488 00:11:50.835 } 00:11:50.835 ] 00:11:50.835 }' 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.835 12:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.400 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.400 12:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.400 [2024-11-06 12:42:39.995869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.336 12:42:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.336 12:42:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.336 "name": "raid_bdev1", 00:11:52.336 "uuid": "de4fd10b-d78f-4512-b3f8-58b4094c0876", 00:11:52.336 "strip_size_kb": 64, 00:11:52.336 "state": "online", 00:11:52.336 "raid_level": "concat", 00:11:52.336 "superblock": true, 00:11:52.336 "num_base_bdevs": 4, 00:11:52.336 "num_base_bdevs_discovered": 4, 00:11:52.336 "num_base_bdevs_operational": 4, 00:11:52.336 "base_bdevs_list": [ 00:11:52.336 { 00:11:52.336 "name": "BaseBdev1", 00:11:52.336 "uuid": "f6c143b7-7f1b-5950-ac73-51a3da49f28e", 00:11:52.336 "is_configured": true, 00:11:52.336 "data_offset": 2048, 00:11:52.336 "data_size": 63488 00:11:52.336 }, 00:11:52.336 { 00:11:52.336 "name": "BaseBdev2", 00:11:52.336 "uuid": "05b6e963-8e74-5d55-92fa-83edd568d318", 00:11:52.336 "is_configured": true, 00:11:52.336 "data_offset": 2048, 00:11:52.336 "data_size": 63488 00:11:52.336 }, 00:11:52.336 { 00:11:52.336 "name": "BaseBdev3", 00:11:52.336 "uuid": "ae16fa74-3fdd-55ff-968b-619d783c45d5", 00:11:52.336 "is_configured": true, 00:11:52.336 "data_offset": 2048, 00:11:52.336 "data_size": 63488 00:11:52.336 }, 00:11:52.336 { 00:11:52.336 "name": "BaseBdev4", 00:11:52.336 "uuid": "848edde6-e68c-5444-8b89-08701253d986", 00:11:52.336 "is_configured": true, 00:11:52.336 "data_offset": 2048, 00:11:52.336 "data_size": 63488 00:11:52.336 } 00:11:52.336 ] 00:11:52.336 }' 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.336 12:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.903 12:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.903 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.903 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.904 [2024-11-06 12:42:41.459990] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.904 [2024-11-06 12:42:41.460033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.904 [2024-11-06 12:42:41.463454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.904 [2024-11-06 12:42:41.463534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.904 [2024-11-06 12:42:41.463596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.904 [2024-11-06 12:42:41.463619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.904 { 00:11:52.904 "results": [ 00:11:52.904 { 00:11:52.904 "job": "raid_bdev1", 00:11:52.904 "core_mask": "0x1", 00:11:52.904 "workload": "randrw", 00:11:52.904 "percentage": 50, 00:11:52.904 "status": "finished", 00:11:52.904 "queue_depth": 1, 00:11:52.904 "io_size": 131072, 00:11:52.904 "runtime": 1.461667, 00:11:52.904 "iops": 10504.44458279485, 00:11:52.904 "mibps": 1313.0555728493562, 00:11:52.904 "io_failed": 1, 00:11:52.904 "io_timeout": 0, 00:11:52.904 "avg_latency_us": 132.73297249933395, 00:11:52.904 "min_latency_us": 39.56363636363636, 00:11:52.904 "max_latency_us": 1779.898181818182 00:11:52.904 } 00:11:52.904 ], 00:11:52.904 "core_count": 1 00:11:52.904 } 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73052 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73052 ']' 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73052 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73052 00:11:52.904 killing process with pid 73052 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73052' 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73052 00:11:52.904 [2024-11-06 12:42:41.497861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.904 12:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73052 00:11:53.162 [2024-11-06 12:42:41.791433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.537 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EJsl8z2oai 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.538 ************************************ 00:11:54.538 END TEST raid_read_error_test 00:11:54.538 ************************************ 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:11:54.538 00:11:54.538 real 0m4.867s 
00:11:54.538 user 0m5.982s 00:11:54.538 sys 0m0.608s 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.538 12:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.538 12:42:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:54.538 12:42:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:54.538 12:42:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.538 12:42:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.538 ************************************ 00:11:54.538 START TEST raid_write_error_test 00:11:54.538 ************************************ 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ADp7nfxx4L 00:11:54.538 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73198 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73198 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73198 ']' 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:54.538 12:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.538 [2024-11-06 12:42:43.071867] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:54.538 [2024-11-06 12:42:43.072063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73198 ] 00:11:54.796 [2024-11-06 12:42:43.264945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.796 [2024-11-06 12:42:43.410227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.069 [2024-11-06 12:42:43.609022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.069 [2024-11-06 12:42:43.609104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.649 BaseBdev1_malloc 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.649 true 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.649 [2024-11-06 12:42:44.153963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:55.649 [2024-11-06 12:42:44.154234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.649 [2024-11-06 12:42:44.154309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:55.649 [2024-11-06 12:42:44.154444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.649 [2024-11-06 12:42:44.157489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.649 [2024-11-06 12:42:44.157655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:55.649 BaseBdev1 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.649 BaseBdev2_malloc 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:55.649 12:42:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.649 true 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.649 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.650 [2024-11-06 12:42:44.222442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:55.650 [2024-11-06 12:42:44.222523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.650 [2024-11-06 12:42:44.222549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:55.650 [2024-11-06 12:42:44.222566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.650 [2024-11-06 12:42:44.225325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.650 [2024-11-06 12:42:44.225377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:55.650 BaseBdev2 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:55.650 BaseBdev3_malloc 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.650 true 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.650 [2024-11-06 12:42:44.293678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:55.650 [2024-11-06 12:42:44.293755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.650 [2024-11-06 12:42:44.293783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:55.650 [2024-11-06 12:42:44.293800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.650 [2024-11-06 12:42:44.296703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.650 [2024-11-06 12:42:44.296756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:55.650 BaseBdev3 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.650 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.908 BaseBdev4_malloc 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.908 true 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.908 [2024-11-06 12:42:44.355139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:55.908 [2024-11-06 12:42:44.355399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.908 [2024-11-06 12:42:44.355469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:55.908 [2024-11-06 12:42:44.355593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.908 [2024-11-06 12:42:44.358444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.908 [2024-11-06 12:42:44.358609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:55.908 BaseBdev4 
00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.908 [2024-11-06 12:42:44.367393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.908 [2024-11-06 12:42:44.369967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.908 [2024-11-06 12:42:44.370074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.908 [2024-11-06 12:42:44.370170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.908 [2024-11-06 12:42:44.370496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:55.908 [2024-11-06 12:42:44.370523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.908 [2024-11-06 12:42:44.370834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:55.908 [2024-11-06 12:42:44.371062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:55.908 [2024-11-06 12:42:44.371081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:55.908 [2024-11-06 12:42:44.371353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.908 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.909 "name": "raid_bdev1", 00:11:55.909 "uuid": "729235dc-aff3-47f6-b366-a4a83301aa78", 00:11:55.909 "strip_size_kb": 64, 00:11:55.909 "state": "online", 00:11:55.909 "raid_level": "concat", 00:11:55.909 "superblock": true, 00:11:55.909 "num_base_bdevs": 4, 00:11:55.909 "num_base_bdevs_discovered": 4, 00:11:55.909 
"num_base_bdevs_operational": 4, 00:11:55.909 "base_bdevs_list": [ 00:11:55.909 { 00:11:55.909 "name": "BaseBdev1", 00:11:55.909 "uuid": "06905035-e5bb-51f5-b1fc-c512b0304efa", 00:11:55.909 "is_configured": true, 00:11:55.909 "data_offset": 2048, 00:11:55.909 "data_size": 63488 00:11:55.909 }, 00:11:55.909 { 00:11:55.909 "name": "BaseBdev2", 00:11:55.909 "uuid": "a8d55447-4809-5aa3-a2b6-1a69c7a24aa1", 00:11:55.909 "is_configured": true, 00:11:55.909 "data_offset": 2048, 00:11:55.909 "data_size": 63488 00:11:55.909 }, 00:11:55.909 { 00:11:55.909 "name": "BaseBdev3", 00:11:55.909 "uuid": "8a336783-fe42-5850-9b70-4444c862d042", 00:11:55.909 "is_configured": true, 00:11:55.909 "data_offset": 2048, 00:11:55.909 "data_size": 63488 00:11:55.909 }, 00:11:55.909 { 00:11:55.909 "name": "BaseBdev4", 00:11:55.909 "uuid": "4f58cff4-b84e-5e5f-82e7-5cb5ff6e7ce2", 00:11:55.909 "is_configured": true, 00:11:55.909 "data_offset": 2048, 00:11:55.909 "data_size": 63488 00:11:55.909 } 00:11:55.909 ] 00:11:55.909 }' 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.909 12:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.476 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.476 12:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.476 [2024-11-06 12:42:45.013066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.436 12:42:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.436 "name": "raid_bdev1", 00:11:57.436 "uuid": "729235dc-aff3-47f6-b366-a4a83301aa78", 00:11:57.436 "strip_size_kb": 64, 00:11:57.436 "state": "online", 00:11:57.436 "raid_level": "concat", 00:11:57.436 "superblock": true, 00:11:57.436 "num_base_bdevs": 4, 00:11:57.436 "num_base_bdevs_discovered": 4, 00:11:57.436 "num_base_bdevs_operational": 4, 00:11:57.436 "base_bdevs_list": [ 00:11:57.436 { 00:11:57.436 "name": "BaseBdev1", 00:11:57.436 "uuid": "06905035-e5bb-51f5-b1fc-c512b0304efa", 00:11:57.436 "is_configured": true, 00:11:57.436 "data_offset": 2048, 00:11:57.436 "data_size": 63488 00:11:57.436 }, 00:11:57.436 { 00:11:57.436 "name": "BaseBdev2", 00:11:57.436 "uuid": "a8d55447-4809-5aa3-a2b6-1a69c7a24aa1", 00:11:57.436 "is_configured": true, 00:11:57.436 "data_offset": 2048, 00:11:57.436 "data_size": 63488 00:11:57.436 }, 00:11:57.436 { 00:11:57.436 "name": "BaseBdev3", 00:11:57.436 "uuid": "8a336783-fe42-5850-9b70-4444c862d042", 00:11:57.436 "is_configured": true, 00:11:57.436 "data_offset": 2048, 00:11:57.436 "data_size": 63488 00:11:57.436 }, 00:11:57.436 { 00:11:57.436 "name": "BaseBdev4", 00:11:57.436 "uuid": "4f58cff4-b84e-5e5f-82e7-5cb5ff6e7ce2", 00:11:57.436 "is_configured": true, 00:11:57.436 "data_offset": 2048, 00:11:57.436 "data_size": 63488 00:11:57.436 } 00:11:57.436 ] 00:11:57.436 }' 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.436 12:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.002 [2024-11-06 12:42:46.436945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.002 [2024-11-06 12:42:46.437215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.002 [2024-11-06 12:42:46.440711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.002 [2024-11-06 12:42:46.440791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.002 [2024-11-06 12:42:46.440850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.002 [2024-11-06 12:42:46.440868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:58.002 { 00:11:58.002 "results": [ 00:11:58.002 { 00:11:58.002 "job": "raid_bdev1", 00:11:58.002 "core_mask": "0x1", 00:11:58.002 "workload": "randrw", 00:11:58.002 "percentage": 50, 00:11:58.002 "status": "finished", 00:11:58.002 "queue_depth": 1, 00:11:58.002 "io_size": 131072, 00:11:58.002 "runtime": 1.421615, 00:11:58.002 "iops": 10810.94389127858, 00:11:58.002 "mibps": 1351.3679864098226, 00:11:58.002 "io_failed": 1, 00:11:58.002 "io_timeout": 0, 00:11:58.002 "avg_latency_us": 129.0373398000828, 00:11:58.002 "min_latency_us": 39.09818181818182, 00:11:58.002 "max_latency_us": 1891.6072727272726 00:11:58.002 } 00:11:58.002 ], 00:11:58.002 "core_count": 1 00:11:58.002 } 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73198 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73198 ']' 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73198 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73198 00:11:58.002 killing process with pid 73198 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73198' 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73198 00:11:58.002 [2024-11-06 12:42:46.475063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.002 12:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73198 00:11:58.261 [2024-11-06 12:42:46.757336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ADp7nfxx4L 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:59.196 ************************************ 00:11:59.196 END TEST raid_write_error_test 00:11:59.196 ************************************ 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.196 12:42:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:59.196 00:11:59.196 real 0m4.877s 00:11:59.196 user 0m6.043s 00:11:59.196 sys 0m0.629s 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:59.196 12:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.454 12:42:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:59.454 12:42:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:59.454 12:42:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:59.454 12:42:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.454 12:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.454 ************************************ 00:11:59.454 START TEST raid_state_function_test 00:11:59.454 ************************************ 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:59.454 12:42:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:59.454 Process raid pid: 73341 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73341 00:11:59.454 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73341' 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73341 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73341 ']' 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.455 12:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.455 [2024-11-06 12:42:47.977465] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:11:59.455 [2024-11-06 12:42:47.977794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.713 [2024-11-06 12:42:48.149469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.713 [2024-11-06 12:42:48.280163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.972 [2024-11-06 12:42:48.485184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.972 [2024-11-06 12:42:48.485462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.539 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.540 [2024-11-06 12:42:49.057038] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.540 [2024-11-06 12:42:49.057096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.540 [2024-11-06 12:42:49.057113] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.540 [2024-11-06 12:42:49.057130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.540 [2024-11-06 12:42:49.057140] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:00.540 [2024-11-06 12:42:49.057154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.540 [2024-11-06 12:42:49.057164] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.540 [2024-11-06 12:42:49.057178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.540 "name": "Existed_Raid", 00:12:00.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.540 "strip_size_kb": 0, 00:12:00.540 "state": "configuring", 00:12:00.540 "raid_level": "raid1", 00:12:00.540 "superblock": false, 00:12:00.540 "num_base_bdevs": 4, 00:12:00.540 "num_base_bdevs_discovered": 0, 00:12:00.540 "num_base_bdevs_operational": 4, 00:12:00.540 "base_bdevs_list": [ 00:12:00.540 { 00:12:00.540 "name": "BaseBdev1", 00:12:00.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.540 "is_configured": false, 00:12:00.540 "data_offset": 0, 00:12:00.540 "data_size": 0 00:12:00.540 }, 00:12:00.540 { 00:12:00.540 "name": "BaseBdev2", 00:12:00.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.540 "is_configured": false, 00:12:00.540 "data_offset": 0, 00:12:00.540 "data_size": 0 00:12:00.540 }, 00:12:00.540 { 00:12:00.540 "name": "BaseBdev3", 00:12:00.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.540 "is_configured": false, 00:12:00.540 "data_offset": 0, 00:12:00.540 "data_size": 0 00:12:00.540 }, 00:12:00.540 { 00:12:00.540 "name": "BaseBdev4", 00:12:00.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.540 "is_configured": false, 00:12:00.540 "data_offset": 0, 00:12:00.540 "data_size": 0 00:12:00.540 } 00:12:00.540 ] 00:12:00.540 }' 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.540 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 [2024-11-06 12:42:49.581169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.109 [2024-11-06 12:42:49.581250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 [2024-11-06 12:42:49.589146] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.109 [2024-11-06 12:42:49.589208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.109 [2024-11-06 12:42:49.589225] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.109 [2024-11-06 12:42:49.589241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.109 [2024-11-06 12:42:49.589251] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.109 [2024-11-06 12:42:49.589265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.109 [2024-11-06 12:42:49.589275] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.109 [2024-11-06 12:42:49.589289] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 [2024-11-06 12:42:49.633845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.109 BaseBdev1 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 [ 00:12:01.109 { 00:12:01.109 "name": "BaseBdev1", 00:12:01.109 "aliases": [ 00:12:01.109 "75614e92-7094-4a05-b9f9-ecc0bef7983d" 00:12:01.109 ], 00:12:01.109 "product_name": "Malloc disk", 00:12:01.109 "block_size": 512, 00:12:01.109 "num_blocks": 65536, 00:12:01.109 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:01.109 "assigned_rate_limits": { 00:12:01.109 "rw_ios_per_sec": 0, 00:12:01.109 "rw_mbytes_per_sec": 0, 00:12:01.109 "r_mbytes_per_sec": 0, 00:12:01.109 "w_mbytes_per_sec": 0 00:12:01.109 }, 00:12:01.109 "claimed": true, 00:12:01.109 "claim_type": "exclusive_write", 00:12:01.109 "zoned": false, 00:12:01.109 "supported_io_types": { 00:12:01.109 "read": true, 00:12:01.109 "write": true, 00:12:01.109 "unmap": true, 00:12:01.109 "flush": true, 00:12:01.109 "reset": true, 00:12:01.109 "nvme_admin": false, 00:12:01.109 "nvme_io": false, 00:12:01.109 "nvme_io_md": false, 00:12:01.109 "write_zeroes": true, 00:12:01.109 "zcopy": true, 00:12:01.109 "get_zone_info": false, 00:12:01.109 "zone_management": false, 00:12:01.109 "zone_append": false, 00:12:01.109 "compare": false, 00:12:01.109 "compare_and_write": false, 00:12:01.109 "abort": true, 00:12:01.109 "seek_hole": false, 00:12:01.109 "seek_data": false, 00:12:01.109 "copy": true, 00:12:01.109 "nvme_iov_md": false 00:12:01.109 }, 00:12:01.109 "memory_domains": [ 00:12:01.109 { 00:12:01.109 "dma_device_id": "system", 00:12:01.109 "dma_device_type": 1 00:12:01.109 }, 00:12:01.109 { 00:12:01.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.109 "dma_device_type": 2 00:12:01.109 } 00:12:01.109 ], 00:12:01.109 "driver_specific": {} 00:12:01.109 } 00:12:01.109 ] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.109 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.110 "name": "Existed_Raid", 
00:12:01.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.110 "strip_size_kb": 0, 00:12:01.110 "state": "configuring", 00:12:01.110 "raid_level": "raid1", 00:12:01.110 "superblock": false, 00:12:01.110 "num_base_bdevs": 4, 00:12:01.110 "num_base_bdevs_discovered": 1, 00:12:01.110 "num_base_bdevs_operational": 4, 00:12:01.110 "base_bdevs_list": [ 00:12:01.110 { 00:12:01.110 "name": "BaseBdev1", 00:12:01.110 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:01.110 "is_configured": true, 00:12:01.110 "data_offset": 0, 00:12:01.110 "data_size": 65536 00:12:01.110 }, 00:12:01.110 { 00:12:01.110 "name": "BaseBdev2", 00:12:01.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.110 "is_configured": false, 00:12:01.110 "data_offset": 0, 00:12:01.110 "data_size": 0 00:12:01.110 }, 00:12:01.110 { 00:12:01.110 "name": "BaseBdev3", 00:12:01.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.110 "is_configured": false, 00:12:01.110 "data_offset": 0, 00:12:01.110 "data_size": 0 00:12:01.110 }, 00:12:01.110 { 00:12:01.110 "name": "BaseBdev4", 00:12:01.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.110 "is_configured": false, 00:12:01.110 "data_offset": 0, 00:12:01.110 "data_size": 0 00:12:01.110 } 00:12:01.110 ] 00:12:01.110 }' 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.110 12:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.677 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.677 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.677 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.677 [2024-11-06 12:42:50.186046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.678 [2024-11-06 12:42:50.186109] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.678 [2024-11-06 12:42:50.194075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.678 [2024-11-06 12:42:50.196617] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.678 [2024-11-06 12:42:50.196666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.678 [2024-11-06 12:42:50.196699] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.678 [2024-11-06 12:42:50.196716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.678 [2024-11-06 12:42:50.196726] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.678 [2024-11-06 12:42:50.196740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.678 
12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.678 "name": "Existed_Raid", 00:12:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.678 "strip_size_kb": 0, 00:12:01.678 "state": "configuring", 00:12:01.678 "raid_level": "raid1", 00:12:01.678 "superblock": false, 00:12:01.678 "num_base_bdevs": 4, 00:12:01.678 "num_base_bdevs_discovered": 1, 
00:12:01.678 "num_base_bdevs_operational": 4, 00:12:01.678 "base_bdevs_list": [ 00:12:01.678 { 00:12:01.678 "name": "BaseBdev1", 00:12:01.678 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:01.678 "is_configured": true, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 65536 00:12:01.678 }, 00:12:01.678 { 00:12:01.678 "name": "BaseBdev2", 00:12:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.678 "is_configured": false, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 0 00:12:01.678 }, 00:12:01.678 { 00:12:01.678 "name": "BaseBdev3", 00:12:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.678 "is_configured": false, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 0 00:12:01.678 }, 00:12:01.678 { 00:12:01.678 "name": "BaseBdev4", 00:12:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.678 "is_configured": false, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 0 00:12:01.678 } 00:12:01.678 ] 00:12:01.678 }' 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.678 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.243 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 [2024-11-06 12:42:50.744832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.244 BaseBdev2 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 [ 00:12:02.244 { 00:12:02.244 "name": "BaseBdev2", 00:12:02.244 "aliases": [ 00:12:02.244 "15e33f51-cc44-40d4-b1d5-c2b4d2c29573" 00:12:02.244 ], 00:12:02.244 "product_name": "Malloc disk", 00:12:02.244 "block_size": 512, 00:12:02.244 "num_blocks": 65536, 00:12:02.244 "uuid": "15e33f51-cc44-40d4-b1d5-c2b4d2c29573", 00:12:02.244 "assigned_rate_limits": { 00:12:02.244 "rw_ios_per_sec": 0, 00:12:02.244 "rw_mbytes_per_sec": 0, 00:12:02.244 "r_mbytes_per_sec": 0, 00:12:02.244 "w_mbytes_per_sec": 0 00:12:02.244 }, 00:12:02.244 "claimed": true, 00:12:02.244 "claim_type": "exclusive_write", 00:12:02.244 "zoned": false, 00:12:02.244 "supported_io_types": { 00:12:02.244 "read": true, 
00:12:02.244 "write": true, 00:12:02.244 "unmap": true, 00:12:02.244 "flush": true, 00:12:02.244 "reset": true, 00:12:02.244 "nvme_admin": false, 00:12:02.244 "nvme_io": false, 00:12:02.244 "nvme_io_md": false, 00:12:02.244 "write_zeroes": true, 00:12:02.244 "zcopy": true, 00:12:02.244 "get_zone_info": false, 00:12:02.244 "zone_management": false, 00:12:02.244 "zone_append": false, 00:12:02.244 "compare": false, 00:12:02.244 "compare_and_write": false, 00:12:02.244 "abort": true, 00:12:02.244 "seek_hole": false, 00:12:02.244 "seek_data": false, 00:12:02.244 "copy": true, 00:12:02.244 "nvme_iov_md": false 00:12:02.244 }, 00:12:02.244 "memory_domains": [ 00:12:02.244 { 00:12:02.244 "dma_device_id": "system", 00:12:02.244 "dma_device_type": 1 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.244 "dma_device_type": 2 00:12:02.244 } 00:12:02.244 ], 00:12:02.244 "driver_specific": {} 00:12:02.244 } 00:12:02.244 ] 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.244 "name": "Existed_Raid", 00:12:02.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.244 "strip_size_kb": 0, 00:12:02.244 "state": "configuring", 00:12:02.244 "raid_level": "raid1", 00:12:02.244 "superblock": false, 00:12:02.244 "num_base_bdevs": 4, 00:12:02.244 "num_base_bdevs_discovered": 2, 00:12:02.244 "num_base_bdevs_operational": 4, 00:12:02.244 "base_bdevs_list": [ 00:12:02.244 { 00:12:02.244 "name": "BaseBdev1", 00:12:02.244 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:02.244 "is_configured": true, 00:12:02.244 "data_offset": 0, 00:12:02.244 "data_size": 65536 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "name": "BaseBdev2", 00:12:02.244 "uuid": "15e33f51-cc44-40d4-b1d5-c2b4d2c29573", 00:12:02.244 "is_configured": true, 
00:12:02.244 "data_offset": 0, 00:12:02.244 "data_size": 65536 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "name": "BaseBdev3", 00:12:02.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.244 "is_configured": false, 00:12:02.244 "data_offset": 0, 00:12:02.244 "data_size": 0 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "name": "BaseBdev4", 00:12:02.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.244 "is_configured": false, 00:12:02.244 "data_offset": 0, 00:12:02.244 "data_size": 0 00:12:02.244 } 00:12:02.244 ] 00:12:02.244 }' 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.244 12:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.812 [2024-11-06 12:42:51.358665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.812 BaseBdev3 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.812 [ 00:12:02.812 { 00:12:02.812 "name": "BaseBdev3", 00:12:02.812 "aliases": [ 00:12:02.812 "0bfcd92b-5249-4588-bcce-4d581dca47a9" 00:12:02.812 ], 00:12:02.812 "product_name": "Malloc disk", 00:12:02.812 "block_size": 512, 00:12:02.812 "num_blocks": 65536, 00:12:02.812 "uuid": "0bfcd92b-5249-4588-bcce-4d581dca47a9", 00:12:02.812 "assigned_rate_limits": { 00:12:02.812 "rw_ios_per_sec": 0, 00:12:02.812 "rw_mbytes_per_sec": 0, 00:12:02.812 "r_mbytes_per_sec": 0, 00:12:02.812 "w_mbytes_per_sec": 0 00:12:02.812 }, 00:12:02.812 "claimed": true, 00:12:02.812 "claim_type": "exclusive_write", 00:12:02.812 "zoned": false, 00:12:02.812 "supported_io_types": { 00:12:02.812 "read": true, 00:12:02.812 "write": true, 00:12:02.812 "unmap": true, 00:12:02.812 "flush": true, 00:12:02.812 "reset": true, 00:12:02.812 "nvme_admin": false, 00:12:02.812 "nvme_io": false, 00:12:02.812 "nvme_io_md": false, 00:12:02.812 "write_zeroes": true, 00:12:02.812 "zcopy": true, 00:12:02.812 "get_zone_info": false, 00:12:02.812 "zone_management": false, 00:12:02.812 "zone_append": false, 00:12:02.812 "compare": false, 00:12:02.812 "compare_and_write": false, 
00:12:02.812 "abort": true, 00:12:02.812 "seek_hole": false, 00:12:02.812 "seek_data": false, 00:12:02.812 "copy": true, 00:12:02.812 "nvme_iov_md": false 00:12:02.812 }, 00:12:02.812 "memory_domains": [ 00:12:02.812 { 00:12:02.812 "dma_device_id": "system", 00:12:02.812 "dma_device_type": 1 00:12:02.812 }, 00:12:02.812 { 00:12:02.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.812 "dma_device_type": 2 00:12:02.812 } 00:12:02.812 ], 00:12:02.812 "driver_specific": {} 00:12:02.812 } 00:12:02.812 ] 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.812 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.813 "name": "Existed_Raid", 00:12:02.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.813 "strip_size_kb": 0, 00:12:02.813 "state": "configuring", 00:12:02.813 "raid_level": "raid1", 00:12:02.813 "superblock": false, 00:12:02.813 "num_base_bdevs": 4, 00:12:02.813 "num_base_bdevs_discovered": 3, 00:12:02.813 "num_base_bdevs_operational": 4, 00:12:02.813 "base_bdevs_list": [ 00:12:02.813 { 00:12:02.813 "name": "BaseBdev1", 00:12:02.813 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:02.813 "is_configured": true, 00:12:02.813 "data_offset": 0, 00:12:02.813 "data_size": 65536 00:12:02.813 }, 00:12:02.813 { 00:12:02.813 "name": "BaseBdev2", 00:12:02.813 "uuid": "15e33f51-cc44-40d4-b1d5-c2b4d2c29573", 00:12:02.813 "is_configured": true, 00:12:02.813 "data_offset": 0, 00:12:02.813 "data_size": 65536 00:12:02.813 }, 00:12:02.813 { 00:12:02.813 "name": "BaseBdev3", 00:12:02.813 "uuid": "0bfcd92b-5249-4588-bcce-4d581dca47a9", 00:12:02.813 "is_configured": true, 00:12:02.813 "data_offset": 0, 00:12:02.813 "data_size": 65536 00:12:02.813 }, 00:12:02.813 { 00:12:02.813 "name": "BaseBdev4", 00:12:02.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.813 "is_configured": false, 
00:12:02.813 "data_offset": 0, 00:12:02.813 "data_size": 0 00:12:02.813 } 00:12:02.813 ] 00:12:02.813 }' 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.813 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.381 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:03.381 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.381 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.381 [2024-11-06 12:42:51.949468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.382 [2024-11-06 12:42:51.949576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.382 [2024-11-06 12:42:51.949594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.382 [2024-11-06 12:42:51.949949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.382 [2024-11-06 12:42:51.950199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.382 [2024-11-06 12:42:51.950263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:03.382 [2024-11-06 12:42:51.950596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.382 BaseBdev4 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.382 [ 00:12:03.382 { 00:12:03.382 "name": "BaseBdev4", 00:12:03.382 "aliases": [ 00:12:03.382 "0fb958a2-e3ea-4f5b-ac23-d5365398227a" 00:12:03.382 ], 00:12:03.382 "product_name": "Malloc disk", 00:12:03.382 "block_size": 512, 00:12:03.382 "num_blocks": 65536, 00:12:03.382 "uuid": "0fb958a2-e3ea-4f5b-ac23-d5365398227a", 00:12:03.382 "assigned_rate_limits": { 00:12:03.382 "rw_ios_per_sec": 0, 00:12:03.382 "rw_mbytes_per_sec": 0, 00:12:03.382 "r_mbytes_per_sec": 0, 00:12:03.382 "w_mbytes_per_sec": 0 00:12:03.382 }, 00:12:03.382 "claimed": true, 00:12:03.382 "claim_type": "exclusive_write", 00:12:03.382 "zoned": false, 00:12:03.382 "supported_io_types": { 00:12:03.382 "read": true, 00:12:03.382 "write": true, 00:12:03.382 "unmap": true, 00:12:03.382 "flush": true, 00:12:03.382 "reset": true, 00:12:03.382 
"nvme_admin": false, 00:12:03.382 "nvme_io": false, 00:12:03.382 "nvme_io_md": false, 00:12:03.382 "write_zeroes": true, 00:12:03.382 "zcopy": true, 00:12:03.382 "get_zone_info": false, 00:12:03.382 "zone_management": false, 00:12:03.382 "zone_append": false, 00:12:03.382 "compare": false, 00:12:03.382 "compare_and_write": false, 00:12:03.382 "abort": true, 00:12:03.382 "seek_hole": false, 00:12:03.382 "seek_data": false, 00:12:03.382 "copy": true, 00:12:03.382 "nvme_iov_md": false 00:12:03.382 }, 00:12:03.382 "memory_domains": [ 00:12:03.382 { 00:12:03.382 "dma_device_id": "system", 00:12:03.382 "dma_device_type": 1 00:12:03.382 }, 00:12:03.382 { 00:12:03.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.382 "dma_device_type": 2 00:12:03.382 } 00:12:03.382 ], 00:12:03.382 "driver_specific": {} 00:12:03.382 } 00:12:03.382 ] 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.382 12:42:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.382 12:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.382 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.640 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.640 "name": "Existed_Raid", 00:12:03.640 "uuid": "19e2ae07-24e9-4859-9b77-7ed4333a6600", 00:12:03.640 "strip_size_kb": 0, 00:12:03.640 "state": "online", 00:12:03.640 "raid_level": "raid1", 00:12:03.640 "superblock": false, 00:12:03.640 "num_base_bdevs": 4, 00:12:03.640 "num_base_bdevs_discovered": 4, 00:12:03.640 "num_base_bdevs_operational": 4, 00:12:03.640 "base_bdevs_list": [ 00:12:03.640 { 00:12:03.640 "name": "BaseBdev1", 00:12:03.640 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:03.640 "is_configured": true, 00:12:03.640 "data_offset": 0, 00:12:03.640 "data_size": 65536 00:12:03.640 }, 00:12:03.640 { 00:12:03.640 "name": "BaseBdev2", 00:12:03.640 "uuid": "15e33f51-cc44-40d4-b1d5-c2b4d2c29573", 00:12:03.640 "is_configured": true, 00:12:03.640 "data_offset": 0, 00:12:03.640 "data_size": 65536 00:12:03.640 }, 00:12:03.640 { 00:12:03.640 "name": "BaseBdev3", 00:12:03.640 "uuid": 
"0bfcd92b-5249-4588-bcce-4d581dca47a9", 00:12:03.640 "is_configured": true, 00:12:03.640 "data_offset": 0, 00:12:03.640 "data_size": 65536 00:12:03.640 }, 00:12:03.640 { 00:12:03.640 "name": "BaseBdev4", 00:12:03.640 "uuid": "0fb958a2-e3ea-4f5b-ac23-d5365398227a", 00:12:03.640 "is_configured": true, 00:12:03.640 "data_offset": 0, 00:12:03.640 "data_size": 65536 00:12:03.640 } 00:12:03.640 ] 00:12:03.640 }' 00:12:03.640 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.640 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 [2024-11-06 12:42:52.510492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.899 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.900 12:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.900 "name": "Existed_Raid", 00:12:03.900 "aliases": [ 00:12:03.900 "19e2ae07-24e9-4859-9b77-7ed4333a6600" 00:12:03.900 ], 00:12:03.900 "product_name": "Raid Volume", 00:12:03.900 "block_size": 512, 00:12:03.900 "num_blocks": 65536, 00:12:03.900 "uuid": "19e2ae07-24e9-4859-9b77-7ed4333a6600", 00:12:03.900 "assigned_rate_limits": { 00:12:03.900 "rw_ios_per_sec": 0, 00:12:03.900 "rw_mbytes_per_sec": 0, 00:12:03.900 "r_mbytes_per_sec": 0, 00:12:03.900 "w_mbytes_per_sec": 0 00:12:03.900 }, 00:12:03.900 "claimed": false, 00:12:03.900 "zoned": false, 00:12:03.900 "supported_io_types": { 00:12:03.900 "read": true, 00:12:03.900 "write": true, 00:12:03.900 "unmap": false, 00:12:03.900 "flush": false, 00:12:03.900 "reset": true, 00:12:03.900 "nvme_admin": false, 00:12:03.900 "nvme_io": false, 00:12:03.900 "nvme_io_md": false, 00:12:03.900 "write_zeroes": true, 00:12:03.900 "zcopy": false, 00:12:03.900 "get_zone_info": false, 00:12:03.900 "zone_management": false, 00:12:03.900 "zone_append": false, 00:12:03.900 "compare": false, 00:12:03.900 "compare_and_write": false, 00:12:03.900 "abort": false, 00:12:03.900 "seek_hole": false, 00:12:03.900 "seek_data": false, 00:12:03.900 "copy": false, 00:12:03.900 "nvme_iov_md": false 00:12:03.900 }, 00:12:03.900 "memory_domains": [ 00:12:03.900 { 00:12:03.900 "dma_device_id": "system", 00:12:03.900 "dma_device_type": 1 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.900 "dma_device_type": 2 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "system", 00:12:03.900 "dma_device_type": 1 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.900 "dma_device_type": 2 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "system", 00:12:03.900 "dma_device_type": 1 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:03.900 "dma_device_type": 2 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "system", 00:12:03.900 "dma_device_type": 1 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.900 "dma_device_type": 2 00:12:03.900 } 00:12:03.900 ], 00:12:03.900 "driver_specific": { 00:12:03.900 "raid": { 00:12:03.900 "uuid": "19e2ae07-24e9-4859-9b77-7ed4333a6600", 00:12:03.900 "strip_size_kb": 0, 00:12:03.900 "state": "online", 00:12:03.900 "raid_level": "raid1", 00:12:03.900 "superblock": false, 00:12:03.900 "num_base_bdevs": 4, 00:12:03.900 "num_base_bdevs_discovered": 4, 00:12:03.900 "num_base_bdevs_operational": 4, 00:12:03.900 "base_bdevs_list": [ 00:12:03.900 { 00:12:03.900 "name": "BaseBdev1", 00:12:03.900 "uuid": "75614e92-7094-4a05-b9f9-ecc0bef7983d", 00:12:03.900 "is_configured": true, 00:12:03.900 "data_offset": 0, 00:12:03.900 "data_size": 65536 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "name": "BaseBdev2", 00:12:03.900 "uuid": "15e33f51-cc44-40d4-b1d5-c2b4d2c29573", 00:12:03.900 "is_configured": true, 00:12:03.900 "data_offset": 0, 00:12:03.900 "data_size": 65536 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "name": "BaseBdev3", 00:12:03.900 "uuid": "0bfcd92b-5249-4588-bcce-4d581dca47a9", 00:12:03.900 "is_configured": true, 00:12:03.900 "data_offset": 0, 00:12:03.900 "data_size": 65536 00:12:03.900 }, 00:12:03.900 { 00:12:03.900 "name": "BaseBdev4", 00:12:03.900 "uuid": "0fb958a2-e3ea-4f5b-ac23-d5365398227a", 00:12:03.900 "is_configured": true, 00:12:03.900 "data_offset": 0, 00:12:03.900 "data_size": 65536 00:12:03.900 } 00:12:03.900 ] 00:12:03.900 } 00:12:03.900 } 00:12:03.900 }' 00:12:03.900 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:04.159 BaseBdev2 00:12:04.159 BaseBdev3 
00:12:04.159 BaseBdev4' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 12:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.159 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.418 12:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.418 [2024-11-06 12:42:52.870175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.418 
12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.418 12:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.418 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.418 "name": "Existed_Raid", 00:12:04.418 "uuid": "19e2ae07-24e9-4859-9b77-7ed4333a6600", 00:12:04.418 "strip_size_kb": 0, 00:12:04.418 "state": "online", 00:12:04.418 "raid_level": "raid1", 00:12:04.418 "superblock": false, 00:12:04.418 "num_base_bdevs": 4, 00:12:04.418 "num_base_bdevs_discovered": 3, 00:12:04.418 "num_base_bdevs_operational": 3, 00:12:04.418 "base_bdevs_list": [ 00:12:04.418 { 00:12:04.418 "name": null, 00:12:04.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.418 "is_configured": false, 00:12:04.418 "data_offset": 0, 00:12:04.418 "data_size": 65536 00:12:04.418 }, 00:12:04.418 { 00:12:04.418 "name": "BaseBdev2", 00:12:04.418 "uuid": "15e33f51-cc44-40d4-b1d5-c2b4d2c29573", 00:12:04.418 "is_configured": true, 00:12:04.418 "data_offset": 0, 00:12:04.418 "data_size": 65536 00:12:04.418 }, 00:12:04.418 { 00:12:04.418 "name": "BaseBdev3", 00:12:04.418 "uuid": "0bfcd92b-5249-4588-bcce-4d581dca47a9", 00:12:04.418 "is_configured": true, 00:12:04.418 "data_offset": 0, 
00:12:04.418 "data_size": 65536 00:12:04.418 }, 00:12:04.418 { 00:12:04.418 "name": "BaseBdev4", 00:12:04.418 "uuid": "0fb958a2-e3ea-4f5b-ac23-d5365398227a", 00:12:04.418 "is_configured": true, 00:12:04.418 "data_offset": 0, 00:12:04.418 "data_size": 65536 00:12:04.418 } 00:12:04.418 ] 00:12:04.418 }' 00:12:04.418 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.418 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.989 [2024-11-06 12:42:53.536457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.989 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.990 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.990 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.990 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.990 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.248 [2024-11-06 12:42:53.683512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.248 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.248 [2024-11-06 12:42:53.843712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:05.248 [2024-11-06 12:42:53.843845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.507 [2024-11-06 12:42:53.928732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.507 [2024-11-06 12:42:53.928806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.507 [2024-11-06 12:42:53.928839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.507 12:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.507 BaseBdev2 00:12:05.507 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.507 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.508 [ 00:12:05.508 { 00:12:05.508 "name": "BaseBdev2", 00:12:05.508 "aliases": [ 00:12:05.508 "f0f40fe6-8730-4897-9491-fff052caf355" 00:12:05.508 ], 00:12:05.508 "product_name": "Malloc disk", 00:12:05.508 "block_size": 512, 00:12:05.508 "num_blocks": 65536, 00:12:05.508 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:05.508 "assigned_rate_limits": { 00:12:05.508 "rw_ios_per_sec": 0, 00:12:05.508 "rw_mbytes_per_sec": 0, 00:12:05.508 "r_mbytes_per_sec": 0, 00:12:05.508 "w_mbytes_per_sec": 0 00:12:05.508 }, 00:12:05.508 "claimed": false, 00:12:05.508 "zoned": false, 00:12:05.508 "supported_io_types": { 00:12:05.508 "read": true, 00:12:05.508 "write": true, 00:12:05.508 "unmap": true, 00:12:05.508 "flush": true, 00:12:05.508 "reset": true, 00:12:05.508 "nvme_admin": false, 00:12:05.508 "nvme_io": false, 00:12:05.508 "nvme_io_md": false, 00:12:05.508 "write_zeroes": true, 00:12:05.508 "zcopy": true, 00:12:05.508 "get_zone_info": false, 00:12:05.508 "zone_management": false, 00:12:05.508 "zone_append": false, 
00:12:05.508 "compare": false, 00:12:05.508 "compare_and_write": false, 00:12:05.508 "abort": true, 00:12:05.508 "seek_hole": false, 00:12:05.508 "seek_data": false, 00:12:05.508 "copy": true, 00:12:05.508 "nvme_iov_md": false 00:12:05.508 }, 00:12:05.508 "memory_domains": [ 00:12:05.508 { 00:12:05.508 "dma_device_id": "system", 00:12:05.508 "dma_device_type": 1 00:12:05.508 }, 00:12:05.508 { 00:12:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.508 "dma_device_type": 2 00:12:05.508 } 00:12:05.508 ], 00:12:05.508 "driver_specific": {} 00:12:05.508 } 00:12:05.508 ] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.508 BaseBdev3 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.508 [ 00:12:05.508 { 00:12:05.508 "name": "BaseBdev3", 00:12:05.508 "aliases": [ 00:12:05.508 "d09473ed-d2e7-4756-b68d-8deead1ca352" 00:12:05.508 ], 00:12:05.508 "product_name": "Malloc disk", 00:12:05.508 "block_size": 512, 00:12:05.508 "num_blocks": 65536, 00:12:05.508 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:05.508 "assigned_rate_limits": { 00:12:05.508 "rw_ios_per_sec": 0, 00:12:05.508 "rw_mbytes_per_sec": 0, 00:12:05.508 "r_mbytes_per_sec": 0, 00:12:05.508 "w_mbytes_per_sec": 0 00:12:05.508 }, 00:12:05.508 "claimed": false, 00:12:05.508 "zoned": false, 00:12:05.508 "supported_io_types": { 00:12:05.508 "read": true, 00:12:05.508 "write": true, 00:12:05.508 "unmap": true, 00:12:05.508 "flush": true, 00:12:05.508 "reset": true, 00:12:05.508 "nvme_admin": false, 00:12:05.508 "nvme_io": false, 00:12:05.508 "nvme_io_md": false, 00:12:05.508 "write_zeroes": true, 00:12:05.508 "zcopy": true, 00:12:05.508 "get_zone_info": false, 00:12:05.508 "zone_management": false, 00:12:05.508 "zone_append": false, 
00:12:05.508 "compare": false, 00:12:05.508 "compare_and_write": false, 00:12:05.508 "abort": true, 00:12:05.508 "seek_hole": false, 00:12:05.508 "seek_data": false, 00:12:05.508 "copy": true, 00:12:05.508 "nvme_iov_md": false 00:12:05.508 }, 00:12:05.508 "memory_domains": [ 00:12:05.508 { 00:12:05.508 "dma_device_id": "system", 00:12:05.508 "dma_device_type": 1 00:12:05.508 }, 00:12:05.508 { 00:12:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.508 "dma_device_type": 2 00:12:05.508 } 00:12:05.508 ], 00:12:05.508 "driver_specific": {} 00:12:05.508 } 00:12:05.508 ] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.508 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.767 BaseBdev4 00:12:05.767 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.767 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:05.767 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:05.767 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:05.767 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:05.767 12:42:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.768 [ 00:12:05.768 { 00:12:05.768 "name": "BaseBdev4", 00:12:05.768 "aliases": [ 00:12:05.768 "b893b8d9-836e-405b-be21-d366eef46b97" 00:12:05.768 ], 00:12:05.768 "product_name": "Malloc disk", 00:12:05.768 "block_size": 512, 00:12:05.768 "num_blocks": 65536, 00:12:05.768 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:05.768 "assigned_rate_limits": { 00:12:05.768 "rw_ios_per_sec": 0, 00:12:05.768 "rw_mbytes_per_sec": 0, 00:12:05.768 "r_mbytes_per_sec": 0, 00:12:05.768 "w_mbytes_per_sec": 0 00:12:05.768 }, 00:12:05.768 "claimed": false, 00:12:05.768 "zoned": false, 00:12:05.768 "supported_io_types": { 00:12:05.768 "read": true, 00:12:05.768 "write": true, 00:12:05.768 "unmap": true, 00:12:05.768 "flush": true, 00:12:05.768 "reset": true, 00:12:05.768 "nvme_admin": false, 00:12:05.768 "nvme_io": false, 00:12:05.768 "nvme_io_md": false, 00:12:05.768 "write_zeroes": true, 00:12:05.768 "zcopy": true, 00:12:05.768 "get_zone_info": false, 00:12:05.768 "zone_management": false, 00:12:05.768 "zone_append": false, 
00:12:05.768 "compare": false, 00:12:05.768 "compare_and_write": false, 00:12:05.768 "abort": true, 00:12:05.768 "seek_hole": false, 00:12:05.768 "seek_data": false, 00:12:05.768 "copy": true, 00:12:05.768 "nvme_iov_md": false 00:12:05.768 }, 00:12:05.768 "memory_domains": [ 00:12:05.768 { 00:12:05.768 "dma_device_id": "system", 00:12:05.768 "dma_device_type": 1 00:12:05.768 }, 00:12:05.768 { 00:12:05.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.768 "dma_device_type": 2 00:12:05.768 } 00:12:05.768 ], 00:12:05.768 "driver_specific": {} 00:12:05.768 } 00:12:05.768 ] 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.768 [2024-11-06 12:42:54.224126] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.768 [2024-11-06 12:42:54.224417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.768 [2024-11-06 12:42:54.224565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.768 [2024-11-06 12:42:54.227061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.768 [2024-11-06 12:42:54.227129] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:05.768 "name": "Existed_Raid", 00:12:05.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.768 "strip_size_kb": 0, 00:12:05.768 "state": "configuring", 00:12:05.768 "raid_level": "raid1", 00:12:05.768 "superblock": false, 00:12:05.768 "num_base_bdevs": 4, 00:12:05.768 "num_base_bdevs_discovered": 3, 00:12:05.768 "num_base_bdevs_operational": 4, 00:12:05.768 "base_bdevs_list": [ 00:12:05.768 { 00:12:05.768 "name": "BaseBdev1", 00:12:05.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.768 "is_configured": false, 00:12:05.768 "data_offset": 0, 00:12:05.768 "data_size": 0 00:12:05.768 }, 00:12:05.768 { 00:12:05.768 "name": "BaseBdev2", 00:12:05.768 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:05.768 "is_configured": true, 00:12:05.768 "data_offset": 0, 00:12:05.768 "data_size": 65536 00:12:05.768 }, 00:12:05.768 { 00:12:05.768 "name": "BaseBdev3", 00:12:05.768 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:05.768 "is_configured": true, 00:12:05.768 "data_offset": 0, 00:12:05.768 "data_size": 65536 00:12:05.768 }, 00:12:05.768 { 00:12:05.768 "name": "BaseBdev4", 00:12:05.768 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:05.768 "is_configured": true, 00:12:05.768 "data_offset": 0, 00:12:05.768 "data_size": 65536 00:12:05.768 } 00:12:05.768 ] 00:12:05.768 }' 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.768 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.335 [2024-11-06 12:42:54.756324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.335 "name": "Existed_Raid", 00:12:06.335 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.335 "strip_size_kb": 0, 00:12:06.335 "state": "configuring", 00:12:06.335 "raid_level": "raid1", 00:12:06.335 "superblock": false, 00:12:06.335 "num_base_bdevs": 4, 00:12:06.335 "num_base_bdevs_discovered": 2, 00:12:06.335 "num_base_bdevs_operational": 4, 00:12:06.335 "base_bdevs_list": [ 00:12:06.335 { 00:12:06.335 "name": "BaseBdev1", 00:12:06.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.335 "is_configured": false, 00:12:06.335 "data_offset": 0, 00:12:06.335 "data_size": 0 00:12:06.335 }, 00:12:06.335 { 00:12:06.335 "name": null, 00:12:06.335 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:06.335 "is_configured": false, 00:12:06.335 "data_offset": 0, 00:12:06.335 "data_size": 65536 00:12:06.335 }, 00:12:06.335 { 00:12:06.335 "name": "BaseBdev3", 00:12:06.335 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:06.335 "is_configured": true, 00:12:06.335 "data_offset": 0, 00:12:06.335 "data_size": 65536 00:12:06.335 }, 00:12:06.335 { 00:12:06.335 "name": "BaseBdev4", 00:12:06.335 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:06.335 "is_configured": true, 00:12:06.335 "data_offset": 0, 00:12:06.335 "data_size": 65536 00:12:06.335 } 00:12:06.335 ] 00:12:06.335 }' 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.335 12:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.902 [2024-11-06 12:42:55.362792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.902 BaseBdev1 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:06.902 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.903 [ 00:12:06.903 { 00:12:06.903 "name": "BaseBdev1", 00:12:06.903 "aliases": [ 00:12:06.903 "b490e93b-9a18-40ac-926d-139875c064a3" 00:12:06.903 ], 00:12:06.903 "product_name": "Malloc disk", 00:12:06.903 "block_size": 512, 00:12:06.903 "num_blocks": 65536, 00:12:06.903 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:06.903 "assigned_rate_limits": { 00:12:06.903 "rw_ios_per_sec": 0, 00:12:06.903 "rw_mbytes_per_sec": 0, 00:12:06.903 "r_mbytes_per_sec": 0, 00:12:06.903 "w_mbytes_per_sec": 0 00:12:06.903 }, 00:12:06.903 "claimed": true, 00:12:06.903 "claim_type": "exclusive_write", 00:12:06.903 "zoned": false, 00:12:06.903 "supported_io_types": { 00:12:06.903 "read": true, 00:12:06.903 "write": true, 00:12:06.903 "unmap": true, 00:12:06.903 "flush": true, 00:12:06.903 "reset": true, 00:12:06.903 "nvme_admin": false, 00:12:06.903 "nvme_io": false, 00:12:06.903 "nvme_io_md": false, 00:12:06.903 "write_zeroes": true, 00:12:06.903 "zcopy": true, 00:12:06.903 "get_zone_info": false, 00:12:06.903 "zone_management": false, 00:12:06.903 "zone_append": false, 00:12:06.903 "compare": false, 00:12:06.903 "compare_and_write": false, 00:12:06.903 "abort": true, 00:12:06.903 "seek_hole": false, 00:12:06.903 "seek_data": false, 00:12:06.903 "copy": true, 00:12:06.903 "nvme_iov_md": false 00:12:06.903 }, 00:12:06.903 "memory_domains": [ 00:12:06.903 { 00:12:06.903 "dma_device_id": "system", 00:12:06.903 "dma_device_type": 1 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.903 "dma_device_type": 2 00:12:06.903 } 00:12:06.903 ], 00:12:06.903 "driver_specific": {} 00:12:06.903 } 00:12:06.903 ] 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.903 "name": "Existed_Raid", 00:12:06.903 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.903 "strip_size_kb": 0, 00:12:06.903 "state": "configuring", 00:12:06.903 "raid_level": "raid1", 00:12:06.903 "superblock": false, 00:12:06.903 "num_base_bdevs": 4, 00:12:06.903 "num_base_bdevs_discovered": 3, 00:12:06.903 "num_base_bdevs_operational": 4, 00:12:06.903 "base_bdevs_list": [ 00:12:06.903 { 00:12:06.903 "name": "BaseBdev1", 00:12:06.903 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:06.903 "is_configured": true, 00:12:06.903 "data_offset": 0, 00:12:06.903 "data_size": 65536 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "name": null, 00:12:06.903 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:06.903 "is_configured": false, 00:12:06.903 "data_offset": 0, 00:12:06.903 "data_size": 65536 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "name": "BaseBdev3", 00:12:06.903 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:06.903 "is_configured": true, 00:12:06.903 "data_offset": 0, 00:12:06.903 "data_size": 65536 00:12:06.903 }, 00:12:06.903 { 00:12:06.903 "name": "BaseBdev4", 00:12:06.903 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:06.903 "is_configured": true, 00:12:06.903 "data_offset": 0, 00:12:06.903 "data_size": 65536 00:12:06.903 } 00:12:06.903 ] 00:12:06.903 }' 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.903 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.491 [2024-11-06 12:42:55.975086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.491 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.492 12:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.492 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.492 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.492 "name": "Existed_Raid", 00:12:07.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.492 "strip_size_kb": 0, 00:12:07.492 "state": "configuring", 00:12:07.492 "raid_level": "raid1", 00:12:07.492 "superblock": false, 00:12:07.492 "num_base_bdevs": 4, 00:12:07.492 "num_base_bdevs_discovered": 2, 00:12:07.492 "num_base_bdevs_operational": 4, 00:12:07.492 "base_bdevs_list": [ 00:12:07.492 { 00:12:07.492 "name": "BaseBdev1", 00:12:07.492 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:07.492 "is_configured": true, 00:12:07.492 "data_offset": 0, 00:12:07.492 "data_size": 65536 00:12:07.492 }, 00:12:07.492 { 00:12:07.492 "name": null, 00:12:07.492 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:07.492 "is_configured": false, 00:12:07.492 "data_offset": 0, 00:12:07.492 "data_size": 65536 00:12:07.492 }, 00:12:07.492 { 00:12:07.492 "name": null, 00:12:07.492 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:07.492 "is_configured": false, 00:12:07.492 "data_offset": 0, 00:12:07.492 "data_size": 65536 00:12:07.492 }, 00:12:07.492 { 00:12:07.492 "name": "BaseBdev4", 00:12:07.492 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:07.492 "is_configured": true, 00:12:07.492 "data_offset": 0, 00:12:07.492 "data_size": 65536 00:12:07.492 } 00:12:07.492 ] 00:12:07.492 }' 00:12:07.492 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.492 12:42:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.057 [2024-11-06 12:42:56.571233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.057 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.057 12:42:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.058 "name": "Existed_Raid", 00:12:08.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.058 "strip_size_kb": 0, 00:12:08.058 "state": "configuring", 00:12:08.058 "raid_level": "raid1", 00:12:08.058 "superblock": false, 00:12:08.058 "num_base_bdevs": 4, 00:12:08.058 "num_base_bdevs_discovered": 3, 00:12:08.058 "num_base_bdevs_operational": 4, 00:12:08.058 "base_bdevs_list": [ 00:12:08.058 { 00:12:08.058 "name": "BaseBdev1", 00:12:08.058 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:08.058 "is_configured": true, 00:12:08.058 "data_offset": 0, 00:12:08.058 "data_size": 65536 00:12:08.058 }, 00:12:08.058 { 00:12:08.058 "name": null, 00:12:08.058 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:08.058 "is_configured": false, 00:12:08.058 "data_offset": 
0, 00:12:08.058 "data_size": 65536 00:12:08.058 }, 00:12:08.058 { 00:12:08.058 "name": "BaseBdev3", 00:12:08.058 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:08.058 "is_configured": true, 00:12:08.058 "data_offset": 0, 00:12:08.058 "data_size": 65536 00:12:08.058 }, 00:12:08.058 { 00:12:08.058 "name": "BaseBdev4", 00:12:08.058 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:08.058 "is_configured": true, 00:12:08.058 "data_offset": 0, 00:12:08.058 "data_size": 65536 00:12:08.058 } 00:12:08.058 ] 00:12:08.058 }' 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.058 12:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.624 [2024-11-06 12:42:57.127409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.624 12:42:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.624 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.625 "name": "Existed_Raid", 00:12:08.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.625 "strip_size_kb": 0, 00:12:08.625 "state": "configuring", 00:12:08.625 
"raid_level": "raid1", 00:12:08.625 "superblock": false, 00:12:08.625 "num_base_bdevs": 4, 00:12:08.625 "num_base_bdevs_discovered": 2, 00:12:08.625 "num_base_bdevs_operational": 4, 00:12:08.625 "base_bdevs_list": [ 00:12:08.625 { 00:12:08.625 "name": null, 00:12:08.625 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:08.625 "is_configured": false, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 65536 00:12:08.625 }, 00:12:08.625 { 00:12:08.625 "name": null, 00:12:08.625 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:08.625 "is_configured": false, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 65536 00:12:08.625 }, 00:12:08.625 { 00:12:08.625 "name": "BaseBdev3", 00:12:08.625 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:08.625 "is_configured": true, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 65536 00:12:08.625 }, 00:12:08.625 { 00:12:08.625 "name": "BaseBdev4", 00:12:08.625 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:08.625 "is_configured": true, 00:12:08.625 "data_offset": 0, 00:12:08.625 "data_size": 65536 00:12:08.625 } 00:12:08.625 ] 00:12:08.625 }' 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.625 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 [2024-11-06 12:42:57.764254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.192 "name": "Existed_Raid", 00:12:09.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.192 "strip_size_kb": 0, 00:12:09.192 "state": "configuring", 00:12:09.192 "raid_level": "raid1", 00:12:09.192 "superblock": false, 00:12:09.192 "num_base_bdevs": 4, 00:12:09.192 "num_base_bdevs_discovered": 3, 00:12:09.192 "num_base_bdevs_operational": 4, 00:12:09.192 "base_bdevs_list": [ 00:12:09.192 { 00:12:09.192 "name": null, 00:12:09.192 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:09.192 "is_configured": false, 00:12:09.192 "data_offset": 0, 00:12:09.192 "data_size": 65536 00:12:09.192 }, 00:12:09.192 { 00:12:09.192 "name": "BaseBdev2", 00:12:09.192 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:09.192 "is_configured": true, 00:12:09.192 "data_offset": 0, 00:12:09.192 "data_size": 65536 00:12:09.192 }, 00:12:09.192 { 00:12:09.192 "name": "BaseBdev3", 00:12:09.192 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:09.192 "is_configured": true, 00:12:09.192 "data_offset": 0, 00:12:09.192 "data_size": 65536 00:12:09.192 }, 00:12:09.192 { 00:12:09.192 "name": "BaseBdev4", 00:12:09.192 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:09.192 "is_configured": true, 00:12:09.192 "data_offset": 0, 00:12:09.192 "data_size": 65536 00:12:09.192 } 00:12:09.192 ] 00:12:09.192 }' 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.192 12:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 12:42:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b490e93b-9a18-40ac-926d-139875c064a3 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.759 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.018 [2024-11-06 12:42:58.438041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:10.018 [2024-11-06 12:42:58.438100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.018 [2024-11-06 12:42:58.438119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.018 
[2024-11-06 12:42:58.438495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:10.018 [2024-11-06 12:42:58.438692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.018 [2024-11-06 12:42:58.438716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:10.018 [2024-11-06 12:42:58.439014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.018 NewBaseBdev 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:10.018 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.018 [ 00:12:10.018 { 00:12:10.018 "name": "NewBaseBdev", 00:12:10.018 "aliases": [ 00:12:10.018 "b490e93b-9a18-40ac-926d-139875c064a3" 00:12:10.018 ], 00:12:10.018 "product_name": "Malloc disk", 00:12:10.018 "block_size": 512, 00:12:10.018 "num_blocks": 65536, 00:12:10.018 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:10.018 "assigned_rate_limits": { 00:12:10.018 "rw_ios_per_sec": 0, 00:12:10.018 "rw_mbytes_per_sec": 0, 00:12:10.018 "r_mbytes_per_sec": 0, 00:12:10.018 "w_mbytes_per_sec": 0 00:12:10.018 }, 00:12:10.018 "claimed": true, 00:12:10.018 "claim_type": "exclusive_write", 00:12:10.018 "zoned": false, 00:12:10.018 "supported_io_types": { 00:12:10.018 "read": true, 00:12:10.018 "write": true, 00:12:10.018 "unmap": true, 00:12:10.018 "flush": true, 00:12:10.018 "reset": true, 00:12:10.018 "nvme_admin": false, 00:12:10.018 "nvme_io": false, 00:12:10.018 "nvme_io_md": false, 00:12:10.018 "write_zeroes": true, 00:12:10.018 "zcopy": true, 00:12:10.018 "get_zone_info": false, 00:12:10.018 "zone_management": false, 00:12:10.018 "zone_append": false, 00:12:10.018 "compare": false, 00:12:10.019 "compare_and_write": false, 00:12:10.019 "abort": true, 00:12:10.019 "seek_hole": false, 00:12:10.019 "seek_data": false, 00:12:10.019 "copy": true, 00:12:10.019 "nvme_iov_md": false 00:12:10.019 }, 00:12:10.019 "memory_domains": [ 00:12:10.019 { 00:12:10.019 "dma_device_id": "system", 00:12:10.019 "dma_device_type": 1 00:12:10.019 }, 00:12:10.019 { 00:12:10.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.019 "dma_device_type": 2 00:12:10.019 } 00:12:10.019 ], 00:12:10.019 "driver_specific": {} 00:12:10.019 } 00:12:10.019 ] 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.019 "name": "Existed_Raid", 00:12:10.019 "uuid": "c8c59ce2-2dea-40ef-8e66-3819dbadc5e6", 00:12:10.019 "strip_size_kb": 0, 00:12:10.019 "state": "online", 00:12:10.019 
"raid_level": "raid1", 00:12:10.019 "superblock": false, 00:12:10.019 "num_base_bdevs": 4, 00:12:10.019 "num_base_bdevs_discovered": 4, 00:12:10.019 "num_base_bdevs_operational": 4, 00:12:10.019 "base_bdevs_list": [ 00:12:10.019 { 00:12:10.019 "name": "NewBaseBdev", 00:12:10.019 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:10.019 "is_configured": true, 00:12:10.019 "data_offset": 0, 00:12:10.019 "data_size": 65536 00:12:10.019 }, 00:12:10.019 { 00:12:10.019 "name": "BaseBdev2", 00:12:10.019 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:10.019 "is_configured": true, 00:12:10.019 "data_offset": 0, 00:12:10.019 "data_size": 65536 00:12:10.019 }, 00:12:10.019 { 00:12:10.019 "name": "BaseBdev3", 00:12:10.019 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:10.019 "is_configured": true, 00:12:10.019 "data_offset": 0, 00:12:10.019 "data_size": 65536 00:12:10.019 }, 00:12:10.019 { 00:12:10.019 "name": "BaseBdev4", 00:12:10.019 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:10.019 "is_configured": true, 00:12:10.019 "data_offset": 0, 00:12:10.019 "data_size": 65536 00:12:10.019 } 00:12:10.019 ] 00:12:10.019 }' 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.019 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.599 12:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.599 [2024-11-06 12:42:59.006768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.599 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.599 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.599 "name": "Existed_Raid", 00:12:10.599 "aliases": [ 00:12:10.599 "c8c59ce2-2dea-40ef-8e66-3819dbadc5e6" 00:12:10.599 ], 00:12:10.599 "product_name": "Raid Volume", 00:12:10.599 "block_size": 512, 00:12:10.599 "num_blocks": 65536, 00:12:10.599 "uuid": "c8c59ce2-2dea-40ef-8e66-3819dbadc5e6", 00:12:10.599 "assigned_rate_limits": { 00:12:10.599 "rw_ios_per_sec": 0, 00:12:10.599 "rw_mbytes_per_sec": 0, 00:12:10.599 "r_mbytes_per_sec": 0, 00:12:10.599 "w_mbytes_per_sec": 0 00:12:10.599 }, 00:12:10.599 "claimed": false, 00:12:10.599 "zoned": false, 00:12:10.599 "supported_io_types": { 00:12:10.599 "read": true, 00:12:10.599 "write": true, 00:12:10.599 "unmap": false, 00:12:10.599 "flush": false, 00:12:10.599 "reset": true, 00:12:10.599 "nvme_admin": false, 00:12:10.599 "nvme_io": false, 00:12:10.599 "nvme_io_md": false, 00:12:10.599 "write_zeroes": true, 00:12:10.599 "zcopy": false, 00:12:10.599 "get_zone_info": false, 00:12:10.599 "zone_management": false, 00:12:10.599 "zone_append": false, 00:12:10.599 "compare": false, 00:12:10.599 "compare_and_write": false, 00:12:10.599 "abort": false, 00:12:10.599 "seek_hole": false, 00:12:10.599 "seek_data": false, 00:12:10.599 
"copy": false, 00:12:10.599 "nvme_iov_md": false 00:12:10.599 }, 00:12:10.599 "memory_domains": [ 00:12:10.599 { 00:12:10.599 "dma_device_id": "system", 00:12:10.599 "dma_device_type": 1 00:12:10.599 }, 00:12:10.599 { 00:12:10.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.599 "dma_device_type": 2 00:12:10.599 }, 00:12:10.599 { 00:12:10.600 "dma_device_id": "system", 00:12:10.600 "dma_device_type": 1 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.600 "dma_device_type": 2 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "dma_device_id": "system", 00:12:10.600 "dma_device_type": 1 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.600 "dma_device_type": 2 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "dma_device_id": "system", 00:12:10.600 "dma_device_type": 1 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.600 "dma_device_type": 2 00:12:10.600 } 00:12:10.600 ], 00:12:10.600 "driver_specific": { 00:12:10.600 "raid": { 00:12:10.600 "uuid": "c8c59ce2-2dea-40ef-8e66-3819dbadc5e6", 00:12:10.600 "strip_size_kb": 0, 00:12:10.600 "state": "online", 00:12:10.600 "raid_level": "raid1", 00:12:10.600 "superblock": false, 00:12:10.600 "num_base_bdevs": 4, 00:12:10.600 "num_base_bdevs_discovered": 4, 00:12:10.600 "num_base_bdevs_operational": 4, 00:12:10.600 "base_bdevs_list": [ 00:12:10.600 { 00:12:10.600 "name": "NewBaseBdev", 00:12:10.600 "uuid": "b490e93b-9a18-40ac-926d-139875c064a3", 00:12:10.600 "is_configured": true, 00:12:10.600 "data_offset": 0, 00:12:10.600 "data_size": 65536 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "name": "BaseBdev2", 00:12:10.600 "uuid": "f0f40fe6-8730-4897-9491-fff052caf355", 00:12:10.600 "is_configured": true, 00:12:10.600 "data_offset": 0, 00:12:10.600 "data_size": 65536 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "name": "BaseBdev3", 00:12:10.600 "uuid": "d09473ed-d2e7-4756-b68d-8deead1ca352", 00:12:10.600 
"is_configured": true, 00:12:10.600 "data_offset": 0, 00:12:10.600 "data_size": 65536 00:12:10.600 }, 00:12:10.600 { 00:12:10.600 "name": "BaseBdev4", 00:12:10.600 "uuid": "b893b8d9-836e-405b-be21-d366eef46b97", 00:12:10.600 "is_configured": true, 00:12:10.600 "data_offset": 0, 00:12:10.600 "data_size": 65536 00:12:10.600 } 00:12:10.600 ] 00:12:10.600 } 00:12:10.600 } 00:12:10.600 }' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:10.600 BaseBdev2 00:12:10.600 BaseBdev3 00:12:10.600 BaseBdev4' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.600 12:42:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.600 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.859 12:42:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.859 [2024-11-06 12:42:59.350354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.859 [2024-11-06 12:42:59.350398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.859 [2024-11-06 12:42:59.350505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.859 [2024-11-06 12:42:59.350864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.859 [2024-11-06 12:42:59.350904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73341 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73341 ']' 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73341 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73341 00:12:10.859 killing process with pid 73341 00:12:10.859 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.860 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.860 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73341' 00:12:10.860 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73341 00:12:10.860 [2024-11-06 12:42:59.388357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.860 12:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73341 00:12:11.119 [2024-11-06 12:42:59.747175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:12.494 00:12:12.494 real 0m12.908s 00:12:12.494 user 0m21.456s 00:12:12.494 sys 0m1.769s 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.494 ************************************ 00:12:12.494 END TEST raid_state_function_test 00:12:12.494 ************************************ 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:12.494 12:43:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:12.494 12:43:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:12.494 12:43:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.494 12:43:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.494 ************************************ 00:12:12.494 START TEST raid_state_function_test_sb 00:12:12.494 ************************************ 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.494 
12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:12.494 Process raid pid: 74025 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74025 00:12:12.494 12:43:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74025' 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74025 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74025 ']' 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:12.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.495 12:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.495 [2024-11-06 12:43:00.965617] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:12:12.495 [2024-11-06 12:43:00.966086] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.761 [2024-11-06 12:43:01.155954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.761 [2024-11-06 12:43:01.289788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.020 [2024-11-06 12:43:01.498078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.020 [2024-11-06 12:43:01.498276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.589 12:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.589 12:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:13.589 12:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.589 12:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.589 12:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.589 [2024-11-06 12:43:01.996955] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.589 [2024-11-06 12:43:01.997049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.589 [2024-11-06 12:43:01.997067] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.589 [2024-11-06 12:43:01.997083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.589 [2024-11-06 12:43:01.997093] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:13.589 [2024-11-06 12:43:01.997107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.589 [2024-11-06 12:43:01.997117] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.589 [2024-11-06 12:43:01.997131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.589 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.590 12:43:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.590 "name": "Existed_Raid", 00:12:13.590 "uuid": "1a6ce87d-e911-4dcf-88a2-ad0fa048f790", 00:12:13.590 "strip_size_kb": 0, 00:12:13.590 "state": "configuring", 00:12:13.590 "raid_level": "raid1", 00:12:13.590 "superblock": true, 00:12:13.590 "num_base_bdevs": 4, 00:12:13.590 "num_base_bdevs_discovered": 0, 00:12:13.590 "num_base_bdevs_operational": 4, 00:12:13.590 "base_bdevs_list": [ 00:12:13.590 { 00:12:13.590 "name": "BaseBdev1", 00:12:13.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.590 "is_configured": false, 00:12:13.590 "data_offset": 0, 00:12:13.590 "data_size": 0 00:12:13.590 }, 00:12:13.590 { 00:12:13.590 "name": "BaseBdev2", 00:12:13.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.590 "is_configured": false, 00:12:13.590 "data_offset": 0, 00:12:13.590 "data_size": 0 00:12:13.590 }, 00:12:13.590 { 00:12:13.590 "name": "BaseBdev3", 00:12:13.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.590 "is_configured": false, 00:12:13.590 "data_offset": 0, 00:12:13.590 "data_size": 0 00:12:13.590 }, 00:12:13.590 { 00:12:13.590 "name": "BaseBdev4", 00:12:13.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.590 "is_configured": false, 00:12:13.590 "data_offset": 0, 00:12:13.590 "data_size": 0 00:12:13.590 } 00:12:13.590 ] 00:12:13.590 }' 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.590 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.156 [2024-11-06 12:43:02.525094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.156 [2024-11-06 12:43:02.525382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.156 [2024-11-06 12:43:02.533038] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.156 [2024-11-06 12:43:02.533235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.156 [2024-11-06 12:43:02.533360] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.156 [2024-11-06 12:43:02.533421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.156 [2024-11-06 12:43:02.533524] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.156 [2024-11-06 12:43:02.533687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.156 [2024-11-06 12:43:02.533829] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:14.156 [2024-11-06 12:43:02.533887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.156 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.156 [2024-11-06 12:43:02.577948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.156 BaseBdev1 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.157 [ 00:12:14.157 { 00:12:14.157 "name": "BaseBdev1", 00:12:14.157 "aliases": [ 00:12:14.157 "8fbc234b-82a2-4699-84ef-00343aa1073b" 00:12:14.157 ], 00:12:14.157 "product_name": "Malloc disk", 00:12:14.157 "block_size": 512, 00:12:14.157 "num_blocks": 65536, 00:12:14.157 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:14.157 "assigned_rate_limits": { 00:12:14.157 "rw_ios_per_sec": 0, 00:12:14.157 "rw_mbytes_per_sec": 0, 00:12:14.157 "r_mbytes_per_sec": 0, 00:12:14.157 "w_mbytes_per_sec": 0 00:12:14.157 }, 00:12:14.157 "claimed": true, 00:12:14.157 "claim_type": "exclusive_write", 00:12:14.157 "zoned": false, 00:12:14.157 "supported_io_types": { 00:12:14.157 "read": true, 00:12:14.157 "write": true, 00:12:14.157 "unmap": true, 00:12:14.157 "flush": true, 00:12:14.157 "reset": true, 00:12:14.157 "nvme_admin": false, 00:12:14.157 "nvme_io": false, 00:12:14.157 "nvme_io_md": false, 00:12:14.157 "write_zeroes": true, 00:12:14.157 "zcopy": true, 00:12:14.157 "get_zone_info": false, 00:12:14.157 "zone_management": false, 00:12:14.157 "zone_append": false, 00:12:14.157 "compare": false, 00:12:14.157 "compare_and_write": false, 00:12:14.157 "abort": true, 00:12:14.157 "seek_hole": false, 00:12:14.157 "seek_data": false, 00:12:14.157 "copy": true, 00:12:14.157 "nvme_iov_md": false 00:12:14.157 }, 00:12:14.157 "memory_domains": [ 00:12:14.157 { 00:12:14.157 "dma_device_id": "system", 00:12:14.157 "dma_device_type": 1 00:12:14.157 }, 00:12:14.157 { 00:12:14.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.157 "dma_device_type": 2 00:12:14.157 } 00:12:14.157 ], 00:12:14.157 "driver_specific": {} 
00:12:14.157 } 00:12:14.157 ] 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.157 "name": "Existed_Raid", 00:12:14.157 "uuid": "9de2f4c3-e0cf-463d-831f-eb5339fe5dad", 00:12:14.157 "strip_size_kb": 0, 00:12:14.157 "state": "configuring", 00:12:14.157 "raid_level": "raid1", 00:12:14.157 "superblock": true, 00:12:14.157 "num_base_bdevs": 4, 00:12:14.157 "num_base_bdevs_discovered": 1, 00:12:14.157 "num_base_bdevs_operational": 4, 00:12:14.157 "base_bdevs_list": [ 00:12:14.157 { 00:12:14.157 "name": "BaseBdev1", 00:12:14.157 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:14.157 "is_configured": true, 00:12:14.157 "data_offset": 2048, 00:12:14.157 "data_size": 63488 00:12:14.157 }, 00:12:14.157 { 00:12:14.157 "name": "BaseBdev2", 00:12:14.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.157 "is_configured": false, 00:12:14.157 "data_offset": 0, 00:12:14.157 "data_size": 0 00:12:14.157 }, 00:12:14.157 { 00:12:14.157 "name": "BaseBdev3", 00:12:14.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.157 "is_configured": false, 00:12:14.157 "data_offset": 0, 00:12:14.157 "data_size": 0 00:12:14.157 }, 00:12:14.157 { 00:12:14.157 "name": "BaseBdev4", 00:12:14.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.157 "is_configured": false, 00:12:14.157 "data_offset": 0, 00:12:14.157 "data_size": 0 00:12:14.157 } 00:12:14.157 ] 00:12:14.157 }' 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.157 12:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.724 [2024-11-06 12:43:03.130171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.724 [2024-11-06 12:43:03.130249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.724 [2024-11-06 12:43:03.138257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.724 [2024-11-06 12:43:03.140841] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.724 [2024-11-06 12:43:03.140898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.724 [2024-11-06 12:43:03.140916] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.724 [2024-11-06 12:43:03.140934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.724 [2024-11-06 12:43:03.140945] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:14.724 [2024-11-06 12:43:03.140959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:14.724 12:43:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.724 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.725 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.725 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.725 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.725 "name": 
"Existed_Raid", 00:12:14.725 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:14.725 "strip_size_kb": 0, 00:12:14.725 "state": "configuring", 00:12:14.725 "raid_level": "raid1", 00:12:14.725 "superblock": true, 00:12:14.725 "num_base_bdevs": 4, 00:12:14.725 "num_base_bdevs_discovered": 1, 00:12:14.725 "num_base_bdevs_operational": 4, 00:12:14.725 "base_bdevs_list": [ 00:12:14.725 { 00:12:14.725 "name": "BaseBdev1", 00:12:14.725 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:14.725 "is_configured": true, 00:12:14.725 "data_offset": 2048, 00:12:14.725 "data_size": 63488 00:12:14.725 }, 00:12:14.725 { 00:12:14.725 "name": "BaseBdev2", 00:12:14.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.725 "is_configured": false, 00:12:14.725 "data_offset": 0, 00:12:14.725 "data_size": 0 00:12:14.725 }, 00:12:14.725 { 00:12:14.725 "name": "BaseBdev3", 00:12:14.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.725 "is_configured": false, 00:12:14.725 "data_offset": 0, 00:12:14.725 "data_size": 0 00:12:14.725 }, 00:12:14.725 { 00:12:14.725 "name": "BaseBdev4", 00:12:14.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.725 "is_configured": false, 00:12:14.725 "data_offset": 0, 00:12:14.725 "data_size": 0 00:12:14.725 } 00:12:14.725 ] 00:12:14.725 }' 00:12:14.725 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.725 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 [2024-11-06 12:43:03.721475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.292 
BaseBdev2 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 [ 00:12:15.292 { 00:12:15.292 "name": "BaseBdev2", 00:12:15.292 "aliases": [ 00:12:15.292 "eb47159a-4578-49eb-b28b-35e9d2386a5b" 00:12:15.292 ], 00:12:15.292 "product_name": "Malloc disk", 00:12:15.292 "block_size": 512, 00:12:15.292 "num_blocks": 65536, 00:12:15.292 "uuid": "eb47159a-4578-49eb-b28b-35e9d2386a5b", 00:12:15.292 "assigned_rate_limits": { 
00:12:15.292 "rw_ios_per_sec": 0, 00:12:15.292 "rw_mbytes_per_sec": 0, 00:12:15.292 "r_mbytes_per_sec": 0, 00:12:15.292 "w_mbytes_per_sec": 0 00:12:15.292 }, 00:12:15.292 "claimed": true, 00:12:15.292 "claim_type": "exclusive_write", 00:12:15.292 "zoned": false, 00:12:15.292 "supported_io_types": { 00:12:15.292 "read": true, 00:12:15.292 "write": true, 00:12:15.292 "unmap": true, 00:12:15.292 "flush": true, 00:12:15.292 "reset": true, 00:12:15.292 "nvme_admin": false, 00:12:15.292 "nvme_io": false, 00:12:15.292 "nvme_io_md": false, 00:12:15.292 "write_zeroes": true, 00:12:15.292 "zcopy": true, 00:12:15.292 "get_zone_info": false, 00:12:15.292 "zone_management": false, 00:12:15.292 "zone_append": false, 00:12:15.292 "compare": false, 00:12:15.292 "compare_and_write": false, 00:12:15.292 "abort": true, 00:12:15.292 "seek_hole": false, 00:12:15.292 "seek_data": false, 00:12:15.292 "copy": true, 00:12:15.292 "nvme_iov_md": false 00:12:15.292 }, 00:12:15.292 "memory_domains": [ 00:12:15.292 { 00:12:15.292 "dma_device_id": "system", 00:12:15.292 "dma_device_type": 1 00:12:15.292 }, 00:12:15.292 { 00:12:15.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.292 "dma_device_type": 2 00:12:15.292 } 00:12:15.292 ], 00:12:15.292 "driver_specific": {} 00:12:15.292 } 00:12:15.292 ] 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.292 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.292 "name": "Existed_Raid", 00:12:15.292 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:15.292 "strip_size_kb": 0, 00:12:15.292 "state": "configuring", 00:12:15.292 "raid_level": "raid1", 00:12:15.292 "superblock": true, 00:12:15.292 "num_base_bdevs": 4, 00:12:15.293 "num_base_bdevs_discovered": 2, 00:12:15.293 "num_base_bdevs_operational": 4, 00:12:15.293 
"base_bdevs_list": [ 00:12:15.293 { 00:12:15.293 "name": "BaseBdev1", 00:12:15.293 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:15.293 "is_configured": true, 00:12:15.293 "data_offset": 2048, 00:12:15.293 "data_size": 63488 00:12:15.293 }, 00:12:15.293 { 00:12:15.293 "name": "BaseBdev2", 00:12:15.293 "uuid": "eb47159a-4578-49eb-b28b-35e9d2386a5b", 00:12:15.293 "is_configured": true, 00:12:15.293 "data_offset": 2048, 00:12:15.293 "data_size": 63488 00:12:15.293 }, 00:12:15.293 { 00:12:15.293 "name": "BaseBdev3", 00:12:15.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.293 "is_configured": false, 00:12:15.293 "data_offset": 0, 00:12:15.293 "data_size": 0 00:12:15.293 }, 00:12:15.293 { 00:12:15.293 "name": "BaseBdev4", 00:12:15.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.293 "is_configured": false, 00:12:15.293 "data_offset": 0, 00:12:15.293 "data_size": 0 00:12:15.293 } 00:12:15.293 ] 00:12:15.293 }' 00:12:15.293 12:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.293 12:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.864 [2024-11-06 12:43:04.336912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.864 BaseBdev3 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.864 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.864 [ 00:12:15.864 { 00:12:15.864 "name": "BaseBdev3", 00:12:15.864 "aliases": [ 00:12:15.864 "8230aafc-75a0-4ff3-bd35-24529029a8fd" 00:12:15.864 ], 00:12:15.864 "product_name": "Malloc disk", 00:12:15.865 "block_size": 512, 00:12:15.865 "num_blocks": 65536, 00:12:15.865 "uuid": "8230aafc-75a0-4ff3-bd35-24529029a8fd", 00:12:15.865 "assigned_rate_limits": { 00:12:15.865 "rw_ios_per_sec": 0, 00:12:15.865 "rw_mbytes_per_sec": 0, 00:12:15.865 "r_mbytes_per_sec": 0, 00:12:15.865 "w_mbytes_per_sec": 0 00:12:15.865 }, 00:12:15.865 "claimed": true, 00:12:15.865 "claim_type": "exclusive_write", 00:12:15.865 "zoned": false, 00:12:15.865 "supported_io_types": { 00:12:15.865 "read": true, 00:12:15.865 
"write": true, 00:12:15.865 "unmap": true, 00:12:15.865 "flush": true, 00:12:15.865 "reset": true, 00:12:15.865 "nvme_admin": false, 00:12:15.865 "nvme_io": false, 00:12:15.865 "nvme_io_md": false, 00:12:15.865 "write_zeroes": true, 00:12:15.865 "zcopy": true, 00:12:15.865 "get_zone_info": false, 00:12:15.865 "zone_management": false, 00:12:15.865 "zone_append": false, 00:12:15.865 "compare": false, 00:12:15.865 "compare_and_write": false, 00:12:15.865 "abort": true, 00:12:15.865 "seek_hole": false, 00:12:15.865 "seek_data": false, 00:12:15.865 "copy": true, 00:12:15.865 "nvme_iov_md": false 00:12:15.865 }, 00:12:15.865 "memory_domains": [ 00:12:15.865 { 00:12:15.865 "dma_device_id": "system", 00:12:15.865 "dma_device_type": 1 00:12:15.865 }, 00:12:15.865 { 00:12:15.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.865 "dma_device_type": 2 00:12:15.865 } 00:12:15.865 ], 00:12:15.865 "driver_specific": {} 00:12:15.865 } 00:12:15.865 ] 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.865 "name": "Existed_Raid", 00:12:15.865 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:15.865 "strip_size_kb": 0, 00:12:15.865 "state": "configuring", 00:12:15.865 "raid_level": "raid1", 00:12:15.865 "superblock": true, 00:12:15.865 "num_base_bdevs": 4, 00:12:15.865 "num_base_bdevs_discovered": 3, 00:12:15.865 "num_base_bdevs_operational": 4, 00:12:15.865 "base_bdevs_list": [ 00:12:15.865 { 00:12:15.865 "name": "BaseBdev1", 00:12:15.865 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:15.865 "is_configured": true, 00:12:15.865 "data_offset": 2048, 00:12:15.865 "data_size": 63488 00:12:15.865 }, 00:12:15.865 { 00:12:15.865 "name": "BaseBdev2", 00:12:15.865 "uuid": 
"eb47159a-4578-49eb-b28b-35e9d2386a5b", 00:12:15.865 "is_configured": true, 00:12:15.865 "data_offset": 2048, 00:12:15.865 "data_size": 63488 00:12:15.865 }, 00:12:15.865 { 00:12:15.865 "name": "BaseBdev3", 00:12:15.865 "uuid": "8230aafc-75a0-4ff3-bd35-24529029a8fd", 00:12:15.865 "is_configured": true, 00:12:15.865 "data_offset": 2048, 00:12:15.865 "data_size": 63488 00:12:15.865 }, 00:12:15.865 { 00:12:15.865 "name": "BaseBdev4", 00:12:15.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.865 "is_configured": false, 00:12:15.865 "data_offset": 0, 00:12:15.865 "data_size": 0 00:12:15.865 } 00:12:15.865 ] 00:12:15.865 }' 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.865 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:16.432 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.432 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 [2024-11-06 12:43:04.947624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:16.433 [2024-11-06 12:43:04.948117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:16.433 [2024-11-06 12:43:04.948144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.433 BaseBdev4 00:12:16.433 [2024-11-06 12:43:04.948527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:16.433 [2024-11-06 12:43:04.948737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.433 [2024-11-06 12:43:04.948768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.433 [2024-11-06 12:43:04.948946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 [ 00:12:16.433 { 00:12:16.433 "name": "BaseBdev4", 00:12:16.433 "aliases": [ 00:12:16.433 "0dd950d1-dba5-46b7-92cb-b5f9a5c506de" 00:12:16.433 ], 00:12:16.433 "product_name": "Malloc disk", 00:12:16.433 "block_size": 512, 00:12:16.433 
"num_blocks": 65536, 00:12:16.433 "uuid": "0dd950d1-dba5-46b7-92cb-b5f9a5c506de", 00:12:16.433 "assigned_rate_limits": { 00:12:16.433 "rw_ios_per_sec": 0, 00:12:16.433 "rw_mbytes_per_sec": 0, 00:12:16.433 "r_mbytes_per_sec": 0, 00:12:16.433 "w_mbytes_per_sec": 0 00:12:16.433 }, 00:12:16.433 "claimed": true, 00:12:16.433 "claim_type": "exclusive_write", 00:12:16.433 "zoned": false, 00:12:16.433 "supported_io_types": { 00:12:16.433 "read": true, 00:12:16.433 "write": true, 00:12:16.433 "unmap": true, 00:12:16.433 "flush": true, 00:12:16.433 "reset": true, 00:12:16.433 "nvme_admin": false, 00:12:16.433 "nvme_io": false, 00:12:16.433 "nvme_io_md": false, 00:12:16.433 "write_zeroes": true, 00:12:16.433 "zcopy": true, 00:12:16.433 "get_zone_info": false, 00:12:16.433 "zone_management": false, 00:12:16.433 "zone_append": false, 00:12:16.433 "compare": false, 00:12:16.433 "compare_and_write": false, 00:12:16.433 "abort": true, 00:12:16.433 "seek_hole": false, 00:12:16.433 "seek_data": false, 00:12:16.433 "copy": true, 00:12:16.433 "nvme_iov_md": false 00:12:16.433 }, 00:12:16.433 "memory_domains": [ 00:12:16.433 { 00:12:16.433 "dma_device_id": "system", 00:12:16.433 "dma_device_type": 1 00:12:16.433 }, 00:12:16.433 { 00:12:16.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.433 "dma_device_type": 2 00:12:16.433 } 00:12:16.433 ], 00:12:16.433 "driver_specific": {} 00:12:16.433 } 00:12:16.433 ] 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.433 12:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.433 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.433 "name": "Existed_Raid", 00:12:16.433 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:16.433 "strip_size_kb": 0, 00:12:16.433 "state": "online", 00:12:16.433 "raid_level": "raid1", 00:12:16.433 "superblock": true, 00:12:16.433 "num_base_bdevs": 4, 
00:12:16.433 "num_base_bdevs_discovered": 4, 00:12:16.433 "num_base_bdevs_operational": 4, 00:12:16.433 "base_bdevs_list": [ 00:12:16.433 { 00:12:16.433 "name": "BaseBdev1", 00:12:16.433 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:16.433 "is_configured": true, 00:12:16.433 "data_offset": 2048, 00:12:16.433 "data_size": 63488 00:12:16.433 }, 00:12:16.433 { 00:12:16.433 "name": "BaseBdev2", 00:12:16.433 "uuid": "eb47159a-4578-49eb-b28b-35e9d2386a5b", 00:12:16.433 "is_configured": true, 00:12:16.433 "data_offset": 2048, 00:12:16.433 "data_size": 63488 00:12:16.433 }, 00:12:16.433 { 00:12:16.433 "name": "BaseBdev3", 00:12:16.433 "uuid": "8230aafc-75a0-4ff3-bd35-24529029a8fd", 00:12:16.433 "is_configured": true, 00:12:16.433 "data_offset": 2048, 00:12:16.433 "data_size": 63488 00:12:16.433 }, 00:12:16.433 { 00:12:16.433 "name": "BaseBdev4", 00:12:16.433 "uuid": "0dd950d1-dba5-46b7-92cb-b5f9a5c506de", 00:12:16.433 "is_configured": true, 00:12:16.433 "data_offset": 2048, 00:12:16.433 "data_size": 63488 00:12:16.433 } 00:12:16.433 ] 00:12:16.433 }' 00:12:16.433 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.433 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.000 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.000 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.000 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.000 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.000 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.000 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.000 
12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.001 [2024-11-06 12:43:05.508577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.001 "name": "Existed_Raid", 00:12:17.001 "aliases": [ 00:12:17.001 "20690e7a-0caa-4e94-88c7-fc8be4417601" 00:12:17.001 ], 00:12:17.001 "product_name": "Raid Volume", 00:12:17.001 "block_size": 512, 00:12:17.001 "num_blocks": 63488, 00:12:17.001 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:17.001 "assigned_rate_limits": { 00:12:17.001 "rw_ios_per_sec": 0, 00:12:17.001 "rw_mbytes_per_sec": 0, 00:12:17.001 "r_mbytes_per_sec": 0, 00:12:17.001 "w_mbytes_per_sec": 0 00:12:17.001 }, 00:12:17.001 "claimed": false, 00:12:17.001 "zoned": false, 00:12:17.001 "supported_io_types": { 00:12:17.001 "read": true, 00:12:17.001 "write": true, 00:12:17.001 "unmap": false, 00:12:17.001 "flush": false, 00:12:17.001 "reset": true, 00:12:17.001 "nvme_admin": false, 00:12:17.001 "nvme_io": false, 00:12:17.001 "nvme_io_md": false, 00:12:17.001 "write_zeroes": true, 00:12:17.001 "zcopy": false, 00:12:17.001 "get_zone_info": false, 00:12:17.001 "zone_management": false, 00:12:17.001 "zone_append": false, 00:12:17.001 "compare": false, 00:12:17.001 "compare_and_write": false, 00:12:17.001 "abort": false, 00:12:17.001 "seek_hole": false, 00:12:17.001 "seek_data": false, 00:12:17.001 "copy": false, 00:12:17.001 
"nvme_iov_md": false 00:12:17.001 }, 00:12:17.001 "memory_domains": [ 00:12:17.001 { 00:12:17.001 "dma_device_id": "system", 00:12:17.001 "dma_device_type": 1 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.001 "dma_device_type": 2 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "system", 00:12:17.001 "dma_device_type": 1 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.001 "dma_device_type": 2 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "system", 00:12:17.001 "dma_device_type": 1 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.001 "dma_device_type": 2 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "system", 00:12:17.001 "dma_device_type": 1 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.001 "dma_device_type": 2 00:12:17.001 } 00:12:17.001 ], 00:12:17.001 "driver_specific": { 00:12:17.001 "raid": { 00:12:17.001 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:17.001 "strip_size_kb": 0, 00:12:17.001 "state": "online", 00:12:17.001 "raid_level": "raid1", 00:12:17.001 "superblock": true, 00:12:17.001 "num_base_bdevs": 4, 00:12:17.001 "num_base_bdevs_discovered": 4, 00:12:17.001 "num_base_bdevs_operational": 4, 00:12:17.001 "base_bdevs_list": [ 00:12:17.001 { 00:12:17.001 "name": "BaseBdev1", 00:12:17.001 "uuid": "8fbc234b-82a2-4699-84ef-00343aa1073b", 00:12:17.001 "is_configured": true, 00:12:17.001 "data_offset": 2048, 00:12:17.001 "data_size": 63488 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "name": "BaseBdev2", 00:12:17.001 "uuid": "eb47159a-4578-49eb-b28b-35e9d2386a5b", 00:12:17.001 "is_configured": true, 00:12:17.001 "data_offset": 2048, 00:12:17.001 "data_size": 63488 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "name": "BaseBdev3", 00:12:17.001 "uuid": "8230aafc-75a0-4ff3-bd35-24529029a8fd", 00:12:17.001 "is_configured": true, 
00:12:17.001 "data_offset": 2048, 00:12:17.001 "data_size": 63488 00:12:17.001 }, 00:12:17.001 { 00:12:17.001 "name": "BaseBdev4", 00:12:17.001 "uuid": "0dd950d1-dba5-46b7-92cb-b5f9a5c506de", 00:12:17.001 "is_configured": true, 00:12:17.001 "data_offset": 2048, 00:12:17.001 "data_size": 63488 00:12:17.001 } 00:12:17.001 ] 00:12:17.001 } 00:12:17.001 } 00:12:17.001 }' 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.001 BaseBdev2 00:12:17.001 BaseBdev3 00:12:17.001 BaseBdev4' 00:12:17.001 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.260 12:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.260 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.260 [2024-11-06 12:43:05.896073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:17.519 12:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.519 12:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.519 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.519 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.519 "name": "Existed_Raid", 00:12:17.519 "uuid": "20690e7a-0caa-4e94-88c7-fc8be4417601", 00:12:17.519 "strip_size_kb": 0, 00:12:17.519 
"state": "online", 00:12:17.519 "raid_level": "raid1", 00:12:17.519 "superblock": true, 00:12:17.519 "num_base_bdevs": 4, 00:12:17.519 "num_base_bdevs_discovered": 3, 00:12:17.519 "num_base_bdevs_operational": 3, 00:12:17.519 "base_bdevs_list": [ 00:12:17.519 { 00:12:17.519 "name": null, 00:12:17.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.519 "is_configured": false, 00:12:17.519 "data_offset": 0, 00:12:17.519 "data_size": 63488 00:12:17.519 }, 00:12:17.519 { 00:12:17.519 "name": "BaseBdev2", 00:12:17.519 "uuid": "eb47159a-4578-49eb-b28b-35e9d2386a5b", 00:12:17.519 "is_configured": true, 00:12:17.519 "data_offset": 2048, 00:12:17.519 "data_size": 63488 00:12:17.519 }, 00:12:17.519 { 00:12:17.519 "name": "BaseBdev3", 00:12:17.519 "uuid": "8230aafc-75a0-4ff3-bd35-24529029a8fd", 00:12:17.519 "is_configured": true, 00:12:17.519 "data_offset": 2048, 00:12:17.519 "data_size": 63488 00:12:17.519 }, 00:12:17.519 { 00:12:17.519 "name": "BaseBdev4", 00:12:17.519 "uuid": "0dd950d1-dba5-46b7-92cb-b5f9a5c506de", 00:12:17.519 "is_configured": true, 00:12:17.519 "data_offset": 2048, 00:12:17.519 "data_size": 63488 00:12:17.519 } 00:12:17.519 ] 00:12:17.519 }' 00:12:17.519 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.519 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.086 12:43:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.086 [2024-11-06 12:43:06.538957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.086 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.086 [2024-11-06 12:43:06.684429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.362 [2024-11-06 12:43:06.827969] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:18.362 [2024-11-06 12:43:06.828107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.362 [2024-11-06 12:43:06.916356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.362 [2024-11-06 12:43:06.916686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.362 [2024-11-06 12:43:06.916721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.362 12:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.621 BaseBdev2 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.621 12:43:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:18.621 [ 00:12:18.621 { 00:12:18.621 "name": "BaseBdev2", 00:12:18.621 "aliases": [ 00:12:18.621 "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d" 00:12:18.621 ], 00:12:18.621 "product_name": "Malloc disk", 00:12:18.621 "block_size": 512, 00:12:18.621 "num_blocks": 65536, 00:12:18.621 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:18.621 "assigned_rate_limits": { 00:12:18.621 "rw_ios_per_sec": 0, 00:12:18.621 "rw_mbytes_per_sec": 0, 00:12:18.621 "r_mbytes_per_sec": 0, 00:12:18.621 "w_mbytes_per_sec": 0 00:12:18.621 }, 00:12:18.621 "claimed": false, 00:12:18.621 "zoned": false, 00:12:18.621 "supported_io_types": { 00:12:18.621 "read": true, 00:12:18.621 "write": true, 00:12:18.621 "unmap": true, 00:12:18.621 "flush": true, 00:12:18.621 "reset": true, 00:12:18.621 "nvme_admin": false, 00:12:18.621 "nvme_io": false, 00:12:18.621 "nvme_io_md": false, 00:12:18.621 "write_zeroes": true, 00:12:18.621 "zcopy": true, 00:12:18.621 "get_zone_info": false, 00:12:18.621 "zone_management": false, 00:12:18.621 "zone_append": false, 00:12:18.621 "compare": false, 00:12:18.621 "compare_and_write": false, 00:12:18.621 "abort": true, 00:12:18.621 "seek_hole": false, 00:12:18.621 "seek_data": false, 00:12:18.621 "copy": true, 00:12:18.621 "nvme_iov_md": false 00:12:18.621 }, 00:12:18.621 "memory_domains": [ 00:12:18.621 { 00:12:18.621 "dma_device_id": "system", 00:12:18.621 "dma_device_type": 1 00:12:18.622 }, 00:12:18.622 { 00:12:18.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.622 "dma_device_type": 2 00:12:18.622 } 00:12:18.622 ], 00:12:18.622 "driver_specific": {} 00:12:18.622 } 00:12:18.622 ] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.622 12:43:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.622 BaseBdev3 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.622 12:43:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.622 [ 00:12:18.622 { 00:12:18.622 "name": "BaseBdev3", 00:12:18.622 "aliases": [ 00:12:18.622 "28e8969f-38a1-4bf8-bb2e-132dc2373d60" 00:12:18.622 ], 00:12:18.622 "product_name": "Malloc disk", 00:12:18.622 "block_size": 512, 00:12:18.622 "num_blocks": 65536, 00:12:18.622 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:18.622 "assigned_rate_limits": { 00:12:18.622 "rw_ios_per_sec": 0, 00:12:18.622 "rw_mbytes_per_sec": 0, 00:12:18.622 "r_mbytes_per_sec": 0, 00:12:18.622 "w_mbytes_per_sec": 0 00:12:18.622 }, 00:12:18.622 "claimed": false, 00:12:18.622 "zoned": false, 00:12:18.622 "supported_io_types": { 00:12:18.622 "read": true, 00:12:18.622 "write": true, 00:12:18.622 "unmap": true, 00:12:18.622 "flush": true, 00:12:18.622 "reset": true, 00:12:18.622 "nvme_admin": false, 00:12:18.622 "nvme_io": false, 00:12:18.622 "nvme_io_md": false, 00:12:18.622 "write_zeroes": true, 00:12:18.622 "zcopy": true, 00:12:18.622 "get_zone_info": false, 00:12:18.622 "zone_management": false, 00:12:18.622 "zone_append": false, 00:12:18.622 "compare": false, 00:12:18.622 "compare_and_write": false, 00:12:18.622 "abort": true, 00:12:18.622 "seek_hole": false, 00:12:18.622 "seek_data": false, 00:12:18.622 "copy": true, 00:12:18.622 "nvme_iov_md": false 00:12:18.622 }, 00:12:18.622 "memory_domains": [ 00:12:18.622 { 00:12:18.622 "dma_device_id": "system", 00:12:18.622 "dma_device_type": 1 00:12:18.622 }, 00:12:18.622 { 00:12:18.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.622 "dma_device_type": 2 00:12:18.622 } 00:12:18.622 ], 00:12:18.622 "driver_specific": {} 00:12:18.622 } 00:12:18.622 ] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.622 BaseBdev4 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.622 [ 00:12:18.622 { 00:12:18.622 "name": "BaseBdev4", 00:12:18.622 "aliases": [ 00:12:18.622 "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0" 00:12:18.622 ], 00:12:18.622 "product_name": "Malloc disk", 00:12:18.622 "block_size": 512, 00:12:18.622 "num_blocks": 65536, 00:12:18.622 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:18.622 "assigned_rate_limits": { 00:12:18.622 "rw_ios_per_sec": 0, 00:12:18.622 "rw_mbytes_per_sec": 0, 00:12:18.622 "r_mbytes_per_sec": 0, 00:12:18.622 "w_mbytes_per_sec": 0 00:12:18.622 }, 00:12:18.622 "claimed": false, 00:12:18.622 "zoned": false, 00:12:18.622 "supported_io_types": { 00:12:18.622 "read": true, 00:12:18.622 "write": true, 00:12:18.622 "unmap": true, 00:12:18.622 "flush": true, 00:12:18.622 "reset": true, 00:12:18.622 "nvme_admin": false, 00:12:18.622 "nvme_io": false, 00:12:18.622 "nvme_io_md": false, 00:12:18.622 "write_zeroes": true, 00:12:18.622 "zcopy": true, 00:12:18.622 "get_zone_info": false, 00:12:18.622 "zone_management": false, 00:12:18.622 "zone_append": false, 00:12:18.622 "compare": false, 00:12:18.622 "compare_and_write": false, 00:12:18.622 "abort": true, 00:12:18.622 "seek_hole": false, 00:12:18.622 "seek_data": false, 00:12:18.622 "copy": true, 00:12:18.622 "nvme_iov_md": false 00:12:18.622 }, 00:12:18.622 "memory_domains": [ 00:12:18.622 { 00:12:18.622 "dma_device_id": "system", 00:12:18.622 "dma_device_type": 1 00:12:18.622 }, 00:12:18.622 { 00:12:18.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.622 "dma_device_type": 2 00:12:18.622 } 00:12:18.622 ], 00:12:18.622 "driver_specific": {} 00:12:18.622 } 00:12:18.622 ] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
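The trace above repeatedly runs the suite's `waitforbdev` helper (the `common/autotest_common.sh@901`–`@909` lines) after each `bdev_malloc_create`. A minimal standalone sketch of that pattern is below; `rpc_cmd` is stubbed here (in the real suite it wraps SPDK's `scripts/rpc.py`), and the retry count and sleep interval are illustrative assumptions, not the suite's exact values.

```shell
# Sketch of the waitforbdev pattern from the trace above.
# rpc_cmd is a hypothetical stub: the real one invokes scripts/rpc.py
# against a running SPDK target.
rpc_cmd() {
  return 0  # pretend every RPC succeeds and every queried bdev exists
}

waitforbdev() {
  local bdev_name=$1
  local bdev_timeout=${2:-2000}  # default applied when no timeout is passed
  local i
  # Let any in-progress bdev examination finish before polling.
  rpc_cmd bdev_wait_for_examine
  # Poll until bdev_get_bdevs reports the bdev, or give up.
  for ((i = 1; i <= 20; i++)); do
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready"
```

With the stub always succeeding, the poll returns on the first iteration; in the real suite the loop is what absorbs the delay between malloc bdev creation and the bdev becoming visible to RPC.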
00:12:18.622 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.623 [2024-11-06 12:43:07.220526] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.623 [2024-11-06 12:43:07.220735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.623 [2024-11-06 12:43:07.220779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.623 [2024-11-06 12:43:07.223238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.623 [2024-11-06 12:43:07.223305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.623 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.946 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.946 "name": "Existed_Raid", 00:12:18.946 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:18.946 "strip_size_kb": 0, 00:12:18.946 "state": "configuring", 00:12:18.946 "raid_level": "raid1", 00:12:18.946 "superblock": true, 00:12:18.946 "num_base_bdevs": 4, 00:12:18.946 "num_base_bdevs_discovered": 3, 00:12:18.946 "num_base_bdevs_operational": 4, 00:12:18.946 "base_bdevs_list": [ 00:12:18.946 { 00:12:18.946 "name": "BaseBdev1", 00:12:18.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.946 "is_configured": false, 00:12:18.946 "data_offset": 0, 00:12:18.946 "data_size": 0 00:12:18.946 }, 00:12:18.946 { 00:12:18.946 "name": "BaseBdev2", 00:12:18.946 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 
00:12:18.946 "is_configured": true, 00:12:18.946 "data_offset": 2048, 00:12:18.946 "data_size": 63488 00:12:18.946 }, 00:12:18.946 { 00:12:18.946 "name": "BaseBdev3", 00:12:18.946 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:18.946 "is_configured": true, 00:12:18.946 "data_offset": 2048, 00:12:18.946 "data_size": 63488 00:12:18.946 }, 00:12:18.946 { 00:12:18.946 "name": "BaseBdev4", 00:12:18.946 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:18.946 "is_configured": true, 00:12:18.946 "data_offset": 2048, 00:12:18.946 "data_size": 63488 00:12:18.946 } 00:12:18.946 ] 00:12:18.946 }' 00:12:18.946 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.946 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.205 [2024-11-06 12:43:07.728697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.205 "name": "Existed_Raid", 00:12:19.205 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:19.205 "strip_size_kb": 0, 00:12:19.205 "state": "configuring", 00:12:19.205 "raid_level": "raid1", 00:12:19.205 "superblock": true, 00:12:19.205 "num_base_bdevs": 4, 00:12:19.205 "num_base_bdevs_discovered": 2, 00:12:19.205 "num_base_bdevs_operational": 4, 00:12:19.205 "base_bdevs_list": [ 00:12:19.205 { 00:12:19.205 "name": "BaseBdev1", 00:12:19.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.205 "is_configured": false, 00:12:19.205 "data_offset": 0, 00:12:19.205 "data_size": 0 00:12:19.205 }, 00:12:19.205 { 00:12:19.205 "name": null, 00:12:19.205 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:19.205 
"is_configured": false, 00:12:19.205 "data_offset": 0, 00:12:19.205 "data_size": 63488 00:12:19.205 }, 00:12:19.205 { 00:12:19.205 "name": "BaseBdev3", 00:12:19.205 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:19.205 "is_configured": true, 00:12:19.205 "data_offset": 2048, 00:12:19.205 "data_size": 63488 00:12:19.205 }, 00:12:19.205 { 00:12:19.205 "name": "BaseBdev4", 00:12:19.205 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:19.205 "is_configured": true, 00:12:19.205 "data_offset": 2048, 00:12:19.205 "data_size": 63488 00:12:19.205 } 00:12:19.205 ] 00:12:19.205 }' 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.205 12:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 [2024-11-06 12:43:08.370725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.805 BaseBdev1 
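The `verify_raid_bdev_state` calls in the trace (the `bdev_raid.sh@103`–`@115` lines) fetch `rpc_cmd bdev_raid_get_bdevs all`, select the `Existed_Raid` entry with `jq`, and compare individual fields against expected values. The sketch below reproduces those field checks against a JSON blob copied from the log; it substitutes `grep` for `jq` so it runs with no dependencies, and `verify_field` is an illustrative helper, not the suite's actual function.

```shell
# Sketch of the verify_raid_bdev_state field checks from the trace.
# raid_bdev_info is trimmed from the bdev_raid_get_bdevs output in the log;
# the real helper obtains it via rpc_cmd and parses it with jq.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}'

# Check that a top-level field has the expected value (grep stand-in for jq).
verify_field() {
  local field=$1 expected=$2
  echo "$raid_bdev_info" | grep -q "\"$field\": $expected"
}

verify_field state '"configuring"' &&
  verify_field raid_level '"raid1"' &&
  verify_field num_base_bdevs_operational 4 &&
  echo "state ok"  # prints "state ok"
```

The trace follows exactly this shape: after removing `BaseBdev2` it expects `num_base_bdevs_discovered` to drop while `state` stays `configuring`, and after `bdev_malloc_create ... BaseBdev1` it expects the discovered count to rise again.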
00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 [ 00:12:19.805 { 00:12:19.805 "name": "BaseBdev1", 00:12:19.805 "aliases": [ 00:12:19.805 "ea337ca1-225f-4b63-a71c-7813032e087f" 00:12:19.805 ], 00:12:19.805 "product_name": "Malloc disk", 00:12:19.805 "block_size": 512, 00:12:19.805 "num_blocks": 65536, 00:12:19.805 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:19.805 "assigned_rate_limits": { 00:12:19.805 
"rw_ios_per_sec": 0, 00:12:19.805 "rw_mbytes_per_sec": 0, 00:12:19.805 "r_mbytes_per_sec": 0, 00:12:19.805 "w_mbytes_per_sec": 0 00:12:19.805 }, 00:12:19.805 "claimed": true, 00:12:19.805 "claim_type": "exclusive_write", 00:12:19.805 "zoned": false, 00:12:19.805 "supported_io_types": { 00:12:19.805 "read": true, 00:12:19.805 "write": true, 00:12:19.805 "unmap": true, 00:12:19.805 "flush": true, 00:12:19.805 "reset": true, 00:12:19.805 "nvme_admin": false, 00:12:19.805 "nvme_io": false, 00:12:19.805 "nvme_io_md": false, 00:12:19.805 "write_zeroes": true, 00:12:19.805 "zcopy": true, 00:12:19.805 "get_zone_info": false, 00:12:19.805 "zone_management": false, 00:12:19.805 "zone_append": false, 00:12:19.805 "compare": false, 00:12:19.805 "compare_and_write": false, 00:12:19.805 "abort": true, 00:12:19.805 "seek_hole": false, 00:12:19.805 "seek_data": false, 00:12:19.805 "copy": true, 00:12:19.805 "nvme_iov_md": false 00:12:19.805 }, 00:12:19.805 "memory_domains": [ 00:12:19.805 { 00:12:19.805 "dma_device_id": "system", 00:12:19.805 "dma_device_type": 1 00:12:19.805 }, 00:12:19.805 { 00:12:19.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.805 "dma_device_type": 2 00:12:19.805 } 00:12:19.805 ], 00:12:19.805 "driver_specific": {} 00:12:19.805 } 00:12:19.805 ] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.805 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.064 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.064 "name": "Existed_Raid", 00:12:20.064 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:20.064 "strip_size_kb": 0, 00:12:20.064 "state": "configuring", 00:12:20.064 "raid_level": "raid1", 00:12:20.064 "superblock": true, 00:12:20.064 "num_base_bdevs": 4, 00:12:20.064 "num_base_bdevs_discovered": 3, 00:12:20.064 "num_base_bdevs_operational": 4, 00:12:20.064 "base_bdevs_list": [ 00:12:20.064 { 00:12:20.064 "name": "BaseBdev1", 00:12:20.064 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:20.064 "is_configured": true, 00:12:20.064 "data_offset": 2048, 00:12:20.064 "data_size": 63488 
00:12:20.064 }, 00:12:20.064 { 00:12:20.064 "name": null, 00:12:20.064 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:20.064 "is_configured": false, 00:12:20.064 "data_offset": 0, 00:12:20.064 "data_size": 63488 00:12:20.064 }, 00:12:20.064 { 00:12:20.064 "name": "BaseBdev3", 00:12:20.064 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:20.064 "is_configured": true, 00:12:20.064 "data_offset": 2048, 00:12:20.064 "data_size": 63488 00:12:20.064 }, 00:12:20.064 { 00:12:20.064 "name": "BaseBdev4", 00:12:20.064 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:20.064 "is_configured": true, 00:12:20.064 "data_offset": 2048, 00:12:20.064 "data_size": 63488 00:12:20.064 } 00:12:20.064 ] 00:12:20.064 }' 00:12:20.064 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.064 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.324 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.324 
[2024-11-06 12:43:08.979021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.584 12:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.584 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.584 12:43:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.584 "name": "Existed_Raid", 00:12:20.584 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:20.584 "strip_size_kb": 0, 00:12:20.584 "state": "configuring", 00:12:20.584 "raid_level": "raid1", 00:12:20.584 "superblock": true, 00:12:20.584 "num_base_bdevs": 4, 00:12:20.584 "num_base_bdevs_discovered": 2, 00:12:20.584 "num_base_bdevs_operational": 4, 00:12:20.584 "base_bdevs_list": [ 00:12:20.584 { 00:12:20.584 "name": "BaseBdev1", 00:12:20.584 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:20.584 "is_configured": true, 00:12:20.584 "data_offset": 2048, 00:12:20.584 "data_size": 63488 00:12:20.584 }, 00:12:20.584 { 00:12:20.584 "name": null, 00:12:20.584 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:20.584 "is_configured": false, 00:12:20.584 "data_offset": 0, 00:12:20.584 "data_size": 63488 00:12:20.584 }, 00:12:20.584 { 00:12:20.584 "name": null, 00:12:20.584 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:20.584 "is_configured": false, 00:12:20.584 "data_offset": 0, 00:12:20.584 "data_size": 63488 00:12:20.584 }, 00:12:20.584 { 00:12:20.584 "name": "BaseBdev4", 00:12:20.584 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:20.584 "is_configured": true, 00:12:20.584 "data_offset": 2048, 00:12:20.584 "data_size": 63488 00:12:20.584 } 00:12:20.584 ] 00:12:20.584 }' 00:12:20.584 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.584 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.843 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.843 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.843 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.843 
12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.102 [2024-11-06 12:43:09.535157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.102 "name": "Existed_Raid", 00:12:21.102 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:21.102 "strip_size_kb": 0, 00:12:21.102 "state": "configuring", 00:12:21.102 "raid_level": "raid1", 00:12:21.102 "superblock": true, 00:12:21.102 "num_base_bdevs": 4, 00:12:21.102 "num_base_bdevs_discovered": 3, 00:12:21.102 "num_base_bdevs_operational": 4, 00:12:21.102 "base_bdevs_list": [ 00:12:21.102 { 00:12:21.102 "name": "BaseBdev1", 00:12:21.102 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:21.102 "is_configured": true, 00:12:21.102 "data_offset": 2048, 00:12:21.102 "data_size": 63488 00:12:21.102 }, 00:12:21.102 { 00:12:21.102 "name": null, 00:12:21.102 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:21.102 "is_configured": false, 00:12:21.102 "data_offset": 0, 00:12:21.102 "data_size": 63488 00:12:21.102 }, 00:12:21.102 { 00:12:21.102 "name": "BaseBdev3", 00:12:21.102 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:21.102 "is_configured": true, 00:12:21.102 "data_offset": 2048, 00:12:21.102 "data_size": 63488 00:12:21.102 }, 00:12:21.102 { 00:12:21.102 "name": "BaseBdev4", 00:12:21.102 "uuid": 
"96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:21.102 "is_configured": true, 00:12:21.102 "data_offset": 2048, 00:12:21.102 "data_size": 63488 00:12:21.102 } 00:12:21.102 ] 00:12:21.102 }' 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.102 12:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.669 [2024-11-06 12:43:10.071360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.669 "name": "Existed_Raid", 00:12:21.669 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:21.669 "strip_size_kb": 0, 00:12:21.669 "state": "configuring", 00:12:21.669 "raid_level": "raid1", 00:12:21.669 "superblock": true, 00:12:21.669 "num_base_bdevs": 4, 00:12:21.669 "num_base_bdevs_discovered": 2, 00:12:21.669 "num_base_bdevs_operational": 4, 00:12:21.669 "base_bdevs_list": [ 00:12:21.669 { 00:12:21.669 "name": null, 00:12:21.669 
"uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:21.669 "is_configured": false, 00:12:21.669 "data_offset": 0, 00:12:21.669 "data_size": 63488 00:12:21.669 }, 00:12:21.669 { 00:12:21.669 "name": null, 00:12:21.669 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:21.669 "is_configured": false, 00:12:21.669 "data_offset": 0, 00:12:21.669 "data_size": 63488 00:12:21.669 }, 00:12:21.669 { 00:12:21.669 "name": "BaseBdev3", 00:12:21.669 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:21.669 "is_configured": true, 00:12:21.669 "data_offset": 2048, 00:12:21.669 "data_size": 63488 00:12:21.669 }, 00:12:21.669 { 00:12:21.669 "name": "BaseBdev4", 00:12:21.669 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:21.669 "is_configured": true, 00:12:21.669 "data_offset": 2048, 00:12:21.669 "data_size": 63488 00:12:21.669 } 00:12:21.669 ] 00:12:21.669 }' 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.669 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.239 [2024-11-06 12:43:10.716667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.239 "name": "Existed_Raid", 00:12:22.239 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:22.239 "strip_size_kb": 0, 00:12:22.239 "state": "configuring", 00:12:22.239 "raid_level": "raid1", 00:12:22.239 "superblock": true, 00:12:22.239 "num_base_bdevs": 4, 00:12:22.239 "num_base_bdevs_discovered": 3, 00:12:22.239 "num_base_bdevs_operational": 4, 00:12:22.239 "base_bdevs_list": [ 00:12:22.239 { 00:12:22.239 "name": null, 00:12:22.239 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:22.239 "is_configured": false, 00:12:22.239 "data_offset": 0, 00:12:22.239 "data_size": 63488 00:12:22.239 }, 00:12:22.239 { 00:12:22.239 "name": "BaseBdev2", 00:12:22.239 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:22.239 "is_configured": true, 00:12:22.239 "data_offset": 2048, 00:12:22.239 "data_size": 63488 00:12:22.239 }, 00:12:22.239 { 00:12:22.239 "name": "BaseBdev3", 00:12:22.239 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:22.239 "is_configured": true, 00:12:22.239 "data_offset": 2048, 00:12:22.239 "data_size": 63488 00:12:22.239 }, 00:12:22.239 { 00:12:22.239 "name": "BaseBdev4", 00:12:22.239 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:22.239 "is_configured": true, 00:12:22.239 "data_offset": 2048, 00:12:22.239 "data_size": 63488 00:12:22.239 } 00:12:22.239 ] 00:12:22.239 }' 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.239 12:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ea337ca1-225f-4b63-a71c-7813032e087f 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.807 [2024-11-06 12:43:11.378844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:22.807 [2024-11-06 12:43:11.379138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:22.807 [2024-11-06 12:43:11.379165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.807 NewBaseBdev 00:12:22.807 [2024-11-06 12:43:11.379538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:12:22.807 [2024-11-06 12:43:11.379738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:22.807 [2024-11-06 12:43:11.379754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:22.807 [2024-11-06 12:43:11.379915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.807 [ 00:12:22.807 { 00:12:22.807 "name": "NewBaseBdev", 00:12:22.807 "aliases": [ 00:12:22.807 "ea337ca1-225f-4b63-a71c-7813032e087f" 00:12:22.807 ], 00:12:22.807 "product_name": "Malloc disk", 00:12:22.807 "block_size": 512, 00:12:22.807 "num_blocks": 65536, 00:12:22.807 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:22.807 "assigned_rate_limits": { 00:12:22.807 "rw_ios_per_sec": 0, 00:12:22.807 "rw_mbytes_per_sec": 0, 00:12:22.807 "r_mbytes_per_sec": 0, 00:12:22.807 "w_mbytes_per_sec": 0 00:12:22.807 }, 00:12:22.807 "claimed": true, 00:12:22.807 "claim_type": "exclusive_write", 00:12:22.807 "zoned": false, 00:12:22.807 "supported_io_types": { 00:12:22.807 "read": true, 00:12:22.807 "write": true, 00:12:22.807 "unmap": true, 00:12:22.807 "flush": true, 00:12:22.807 "reset": true, 00:12:22.807 "nvme_admin": false, 00:12:22.807 "nvme_io": false, 00:12:22.807 "nvme_io_md": false, 00:12:22.807 "write_zeroes": true, 00:12:22.807 "zcopy": true, 00:12:22.807 "get_zone_info": false, 00:12:22.807 "zone_management": false, 00:12:22.807 "zone_append": false, 00:12:22.807 "compare": false, 00:12:22.807 "compare_and_write": false, 00:12:22.807 "abort": true, 00:12:22.807 "seek_hole": false, 00:12:22.807 "seek_data": false, 00:12:22.807 "copy": true, 00:12:22.807 "nvme_iov_md": false 00:12:22.807 }, 00:12:22.807 "memory_domains": [ 00:12:22.807 { 00:12:22.807 "dma_device_id": "system", 00:12:22.807 "dma_device_type": 1 00:12:22.807 }, 00:12:22.807 { 00:12:22.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.807 "dma_device_type": 2 00:12:22.807 } 00:12:22.807 ], 00:12:22.807 "driver_specific": {} 00:12:22.807 } 00:12:22.807 ] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.807 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.066 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.066 "name": "Existed_Raid", 00:12:23.066 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:23.066 "strip_size_kb": 0, 00:12:23.066 "state": "online", 00:12:23.066 "raid_level": 
"raid1", 00:12:23.066 "superblock": true, 00:12:23.066 "num_base_bdevs": 4, 00:12:23.066 "num_base_bdevs_discovered": 4, 00:12:23.066 "num_base_bdevs_operational": 4, 00:12:23.066 "base_bdevs_list": [ 00:12:23.066 { 00:12:23.066 "name": "NewBaseBdev", 00:12:23.066 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:23.066 "is_configured": true, 00:12:23.066 "data_offset": 2048, 00:12:23.066 "data_size": 63488 00:12:23.066 }, 00:12:23.066 { 00:12:23.066 "name": "BaseBdev2", 00:12:23.066 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:23.066 "is_configured": true, 00:12:23.066 "data_offset": 2048, 00:12:23.066 "data_size": 63488 00:12:23.066 }, 00:12:23.066 { 00:12:23.066 "name": "BaseBdev3", 00:12:23.066 "uuid": "28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:23.066 "is_configured": true, 00:12:23.066 "data_offset": 2048, 00:12:23.066 "data_size": 63488 00:12:23.066 }, 00:12:23.066 { 00:12:23.066 "name": "BaseBdev4", 00:12:23.066 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:23.066 "is_configured": true, 00:12:23.066 "data_offset": 2048, 00:12:23.066 "data_size": 63488 00:12:23.066 } 00:12:23.066 ] 00:12:23.066 }' 00:12:23.066 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.066 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.324 [2024-11-06 12:43:11.955520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.324 12:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.582 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.582 "name": "Existed_Raid", 00:12:23.582 "aliases": [ 00:12:23.582 "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7" 00:12:23.582 ], 00:12:23.582 "product_name": "Raid Volume", 00:12:23.582 "block_size": 512, 00:12:23.582 "num_blocks": 63488, 00:12:23.582 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:23.582 "assigned_rate_limits": { 00:12:23.582 "rw_ios_per_sec": 0, 00:12:23.582 "rw_mbytes_per_sec": 0, 00:12:23.582 "r_mbytes_per_sec": 0, 00:12:23.582 "w_mbytes_per_sec": 0 00:12:23.582 }, 00:12:23.582 "claimed": false, 00:12:23.582 "zoned": false, 00:12:23.582 "supported_io_types": { 00:12:23.582 "read": true, 00:12:23.582 "write": true, 00:12:23.582 "unmap": false, 00:12:23.582 "flush": false, 00:12:23.582 "reset": true, 00:12:23.582 "nvme_admin": false, 00:12:23.582 "nvme_io": false, 00:12:23.582 "nvme_io_md": false, 00:12:23.582 "write_zeroes": true, 00:12:23.582 "zcopy": false, 00:12:23.582 "get_zone_info": false, 00:12:23.582 "zone_management": false, 00:12:23.582 "zone_append": false, 00:12:23.582 "compare": false, 00:12:23.582 "compare_and_write": false, 00:12:23.582 "abort": false, 00:12:23.582 "seek_hole": false, 
00:12:23.582 "seek_data": false, 00:12:23.582 "copy": false, 00:12:23.582 "nvme_iov_md": false 00:12:23.582 }, 00:12:23.582 "memory_domains": [ 00:12:23.582 { 00:12:23.582 "dma_device_id": "system", 00:12:23.582 "dma_device_type": 1 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.582 "dma_device_type": 2 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "system", 00:12:23.582 "dma_device_type": 1 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.582 "dma_device_type": 2 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "system", 00:12:23.582 "dma_device_type": 1 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.582 "dma_device_type": 2 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "system", 00:12:23.582 "dma_device_type": 1 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.582 "dma_device_type": 2 00:12:23.582 } 00:12:23.582 ], 00:12:23.582 "driver_specific": { 00:12:23.582 "raid": { 00:12:23.582 "uuid": "af1c6b4c-8d72-47c3-b3fd-1265ffcfb2c7", 00:12:23.582 "strip_size_kb": 0, 00:12:23.582 "state": "online", 00:12:23.582 "raid_level": "raid1", 00:12:23.582 "superblock": true, 00:12:23.582 "num_base_bdevs": 4, 00:12:23.582 "num_base_bdevs_discovered": 4, 00:12:23.582 "num_base_bdevs_operational": 4, 00:12:23.582 "base_bdevs_list": [ 00:12:23.582 { 00:12:23.582 "name": "NewBaseBdev", 00:12:23.582 "uuid": "ea337ca1-225f-4b63-a71c-7813032e087f", 00:12:23.582 "is_configured": true, 00:12:23.582 "data_offset": 2048, 00:12:23.582 "data_size": 63488 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "name": "BaseBdev2", 00:12:23.582 "uuid": "c5ca6aa2-ffd4-447f-8a31-17d82a4b531d", 00:12:23.582 "is_configured": true, 00:12:23.582 "data_offset": 2048, 00:12:23.582 "data_size": 63488 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "name": "BaseBdev3", 00:12:23.582 "uuid": 
"28e8969f-38a1-4bf8-bb2e-132dc2373d60", 00:12:23.582 "is_configured": true, 00:12:23.582 "data_offset": 2048, 00:12:23.582 "data_size": 63488 00:12:23.582 }, 00:12:23.582 { 00:12:23.582 "name": "BaseBdev4", 00:12:23.582 "uuid": "96682c0a-8ea9-4d5f-ae08-c9b1dc3da3e0", 00:12:23.582 "is_configured": true, 00:12:23.582 "data_offset": 2048, 00:12:23.582 "data_size": 63488 00:12:23.582 } 00:12:23.582 ] 00:12:23.582 } 00:12:23.582 } 00:12:23.582 }' 00:12:23.582 12:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.582 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:23.582 BaseBdev2 00:12:23.582 BaseBdev3 00:12:23.582 BaseBdev4' 00:12:23.582 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.583 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.840 
12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.840 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.841 [2024-11-06 12:43:12.311210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.841 [2024-11-06 12:43:12.311244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.841 [2024-11-06 12:43:12.311364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.841 [2024-11-06 12:43:12.311736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.841 [2024-11-06 12:43:12.311769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:23.841 12:43:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74025 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74025 ']' 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74025 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74025 00:12:23.841 killing process with pid 74025 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74025' 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74025 00:12:23.841 [2024-11-06 12:43:12.348147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.841 12:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74025 00:12:24.099 [2024-11-06 12:43:12.707764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.471 12:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:25.471 00:12:25.471 real 0m12.900s 00:12:25.471 user 0m21.352s 00:12:25.471 sys 0m1.847s 00:12:25.471 12:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.471 ************************************ 00:12:25.471 
END TEST raid_state_function_test_sb 00:12:25.471 ************************************ 00:12:25.471 12:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.471 12:43:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:25.471 12:43:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:25.471 12:43:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.471 12:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.471 ************************************ 00:12:25.471 START TEST raid_superblock_test 00:12:25.471 ************************************ 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
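The `bdev_raid.sh@393`-`@400` locals above are populated later in the log by the `@416`-`@426` loop, which creates one malloc bdev plus one passthru wrapper per base bdev. A minimal self-contained sketch of that setup pattern (assumptions: `rpc_cmd` is stubbed with `echo` here since no `bdev_svc` app is running, and the names/sizes simply mirror what the xtrace shows):

```shell
# Sketch of the base-bdev setup loop seen in the xtrace below.
# rpc_cmd is a stand-in for SPDK's JSON-RPC client (assumption).
num_base_bdevs=4
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

rpc_cmd() { echo "rpc: $*"; }

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # 32 MiB backing store with a 512-byte block size, as in the log
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
```

The passthru bdevs (`pt1`..`pt4`) rather than the malloc bdevs are then handed to `bdev_raid_create`, so the RAID volume claims the wrappers.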
00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74708 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74708 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74708 ']' 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:25.471 12:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.471 [2024-11-06 12:43:13.904852] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
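`waitforlisten 74708` above blocks until the freshly started `bdev_svc` process is listening on `/var/tmp/spdk.sock`. The real helper in `autotest_common.sh` is more elaborate (it probes via `rpc.py` and handles retries/backoff differently); the sketch below only illustrates the idea, polling for the UNIX socket while checking that the target process is still alive. All names and the retry granularity here are assumptions:

```shell
# Illustrative poll loop, not SPDK's actual waitforlisten implementation.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out waiting
}
```

On success the caller can start issuing `rpc_cmd` calls against `$rpc_addr`; on failure the test aborts instead of issuing RPCs into the void.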
00:12:25.471 [2024-11-06 12:43:13.905236] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74708 ] 00:12:25.471 [2024-11-06 12:43:14.085776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.730 [2024-11-06 12:43:14.215890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.988 [2024-11-06 12:43:14.454450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.988 [2024-11-06 12:43:14.454743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:26.555 
12:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.555 12:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.555 malloc1 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.555 [2024-11-06 12:43:15.022512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.555 [2024-11-06 12:43:15.022622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.555 [2024-11-06 12:43:15.022661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:26.555 [2024-11-06 12:43:15.022677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.555 [2024-11-06 12:43:15.025515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.555 [2024-11-06 12:43:15.025561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.555 pt1 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.555 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 malloc2 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 [2024-11-06 12:43:15.074602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.556 [2024-11-06 12:43:15.074692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.556 [2024-11-06 12:43:15.074728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:26.556 [2024-11-06 12:43:15.074743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.556 [2024-11-06 12:43:15.077465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.556 [2024-11-06 12:43:15.077511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.556 
pt2 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 malloc3 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 [2024-11-06 12:43:15.140148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.556 [2024-11-06 12:43:15.140226] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.556 [2024-11-06 12:43:15.140268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:26.556 [2024-11-06 12:43:15.140284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.556 [2024-11-06 12:43:15.142966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.556 [2024-11-06 12:43:15.143015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.556 pt3 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 malloc4 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 [2024-11-06 12:43:15.195871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.556 [2024-11-06 12:43:15.195939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.556 [2024-11-06 12:43:15.195969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:26.556 [2024-11-06 12:43:15.195983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.556 [2024-11-06 12:43:15.198717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.556 [2024-11-06 12:43:15.198763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.556 pt4 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.556 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.556 [2024-11-06 12:43:15.207896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.814 [2024-11-06 12:43:15.210278] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.814 [2024-11-06 12:43:15.210373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.814 [2024-11-06 12:43:15.210445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.814 [2024-11-06 12:43:15.210687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:26.814 [2024-11-06 12:43:15.210713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.814 [2024-11-06 12:43:15.211052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:26.814 [2024-11-06 12:43:15.211291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:26.814 [2024-11-06 12:43:15.211318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:26.814 [2024-11-06 12:43:15.211521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.814 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.815 
12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.815 "name": "raid_bdev1", 00:12:26.815 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:26.815 "strip_size_kb": 0, 00:12:26.815 "state": "online", 00:12:26.815 "raid_level": "raid1", 00:12:26.815 "superblock": true, 00:12:26.815 "num_base_bdevs": 4, 00:12:26.815 "num_base_bdevs_discovered": 4, 00:12:26.815 "num_base_bdevs_operational": 4, 00:12:26.815 "base_bdevs_list": [ 00:12:26.815 { 00:12:26.815 "name": "pt1", 00:12:26.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.815 "is_configured": true, 00:12:26.815 "data_offset": 2048, 00:12:26.815 "data_size": 63488 00:12:26.815 }, 00:12:26.815 { 00:12:26.815 "name": "pt2", 00:12:26.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.815 "is_configured": true, 00:12:26.815 "data_offset": 2048, 00:12:26.815 "data_size": 63488 00:12:26.815 }, 00:12:26.815 { 00:12:26.815 "name": "pt3", 00:12:26.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.815 "is_configured": true, 00:12:26.815 "data_offset": 2048, 00:12:26.815 "data_size": 63488 
00:12:26.815 }, 00:12:26.815 { 00:12:26.815 "name": "pt4", 00:12:26.815 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.815 "is_configured": true, 00:12:26.815 "data_offset": 2048, 00:12:26.815 "data_size": 63488 00:12:26.815 } 00:12:26.815 ] 00:12:26.815 }' 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.815 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.073 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.073 [2024-11-06 12:43:15.712457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.332 "name": "raid_bdev1", 00:12:27.332 "aliases": [ 00:12:27.332 "da7cec35-401e-48d4-915a-e99a1ba19494" 00:12:27.332 ], 
00:12:27.332 "product_name": "Raid Volume", 00:12:27.332 "block_size": 512, 00:12:27.332 "num_blocks": 63488, 00:12:27.332 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:27.332 "assigned_rate_limits": { 00:12:27.332 "rw_ios_per_sec": 0, 00:12:27.332 "rw_mbytes_per_sec": 0, 00:12:27.332 "r_mbytes_per_sec": 0, 00:12:27.332 "w_mbytes_per_sec": 0 00:12:27.332 }, 00:12:27.332 "claimed": false, 00:12:27.332 "zoned": false, 00:12:27.332 "supported_io_types": { 00:12:27.332 "read": true, 00:12:27.332 "write": true, 00:12:27.332 "unmap": false, 00:12:27.332 "flush": false, 00:12:27.332 "reset": true, 00:12:27.332 "nvme_admin": false, 00:12:27.332 "nvme_io": false, 00:12:27.332 "nvme_io_md": false, 00:12:27.332 "write_zeroes": true, 00:12:27.332 "zcopy": false, 00:12:27.332 "get_zone_info": false, 00:12:27.332 "zone_management": false, 00:12:27.332 "zone_append": false, 00:12:27.332 "compare": false, 00:12:27.332 "compare_and_write": false, 00:12:27.332 "abort": false, 00:12:27.332 "seek_hole": false, 00:12:27.332 "seek_data": false, 00:12:27.332 "copy": false, 00:12:27.332 "nvme_iov_md": false 00:12:27.332 }, 00:12:27.332 "memory_domains": [ 00:12:27.332 { 00:12:27.332 "dma_device_id": "system", 00:12:27.332 "dma_device_type": 1 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.332 "dma_device_type": 2 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": "system", 00:12:27.332 "dma_device_type": 1 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.332 "dma_device_type": 2 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": "system", 00:12:27.332 "dma_device_type": 1 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.332 "dma_device_type": 2 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": "system", 00:12:27.332 "dma_device_type": 1 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:27.332 "dma_device_type": 2 00:12:27.332 } 00:12:27.332 ], 00:12:27.332 "driver_specific": { 00:12:27.332 "raid": { 00:12:27.332 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:27.332 "strip_size_kb": 0, 00:12:27.332 "state": "online", 00:12:27.332 "raid_level": "raid1", 00:12:27.332 "superblock": true, 00:12:27.332 "num_base_bdevs": 4, 00:12:27.332 "num_base_bdevs_discovered": 4, 00:12:27.332 "num_base_bdevs_operational": 4, 00:12:27.332 "base_bdevs_list": [ 00:12:27.332 { 00:12:27.332 "name": "pt1", 00:12:27.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.332 "is_configured": true, 00:12:27.332 "data_offset": 2048, 00:12:27.332 "data_size": 63488 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "name": "pt2", 00:12:27.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.332 "is_configured": true, 00:12:27.332 "data_offset": 2048, 00:12:27.332 "data_size": 63488 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "name": "pt3", 00:12:27.332 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.332 "is_configured": true, 00:12:27.332 "data_offset": 2048, 00:12:27.332 "data_size": 63488 00:12:27.332 }, 00:12:27.332 { 00:12:27.332 "name": "pt4", 00:12:27.332 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.332 "is_configured": true, 00:12:27.332 "data_offset": 2048, 00:12:27.332 "data_size": 63488 00:12:27.332 } 00:12:27.332 ] 00:12:27.332 } 00:12:27.332 } 00:12:27.332 }' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.332 pt2 00:12:27.332 pt3 00:12:27.332 pt4' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.332 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.333 12:43:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.333 12:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.591 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.591 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.591 12:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.591 [2024-11-06 12:43:16.068474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da7cec35-401e-48d4-915a-e99a1ba19494 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z da7cec35-401e-48d4-915a-e99a1ba19494 ']' 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.591 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.591 [2024-11-06 12:43:16.120111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.591 [2024-11-06 12:43:16.120143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.592 [2024-11-06 12:43:16.120275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.592 [2024-11-06 12:43:16.120405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.592 [2024-11-06 12:43:16.120429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.592 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.851 [2024-11-06 12:43:16.284178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:27.851 [2024-11-06 12:43:16.286648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:27.851 [2024-11-06 12:43:16.286724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:27.851 [2024-11-06 12:43:16.286779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:27.851 [2024-11-06 12:43:16.286849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:27.851 [2024-11-06 12:43:16.286924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:27.851 [2024-11-06 12:43:16.286959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:27.851 [2024-11-06 12:43:16.286991] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:27.851 [2024-11-06 12:43:16.287013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.851 [2024-11-06 12:43:16.287029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:12:27.851 request: 00:12:27.851 { 00:12:27.851 "name": "raid_bdev1", 00:12:27.851 "raid_level": "raid1", 00:12:27.851 "base_bdevs": [ 00:12:27.851 "malloc1", 00:12:27.851 "malloc2", 00:12:27.851 "malloc3", 00:12:27.851 "malloc4" 00:12:27.851 ], 00:12:27.851 "superblock": false, 00:12:27.851 "method": "bdev_raid_create", 00:12:27.851 "req_id": 1 00:12:27.851 } 00:12:27.851 Got JSON-RPC error response 00:12:27.851 response: 00:12:27.851 { 00:12:27.851 "code": -17, 00:12:27.851 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:27.851 } 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:27.851 12:43:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.851 [2024-11-06 12:43:16.356173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.851 [2024-11-06 12:43:16.356418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.851 [2024-11-06 12:43:16.356491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.851 [2024-11-06 12:43:16.356649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.851 [2024-11-06 12:43:16.359666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.851 [2024-11-06 12:43:16.359841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:27.851 [2024-11-06 12:43:16.360048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:27.851 [2024-11-06 12:43:16.360256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:27.851 pt1 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:27.851 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.852 12:43:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.852 "name": "raid_bdev1", 00:12:27.852 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:27.852 "strip_size_kb": 0, 00:12:27.852 "state": "configuring", 00:12:27.852 "raid_level": "raid1", 00:12:27.852 "superblock": true, 00:12:27.852 "num_base_bdevs": 4, 00:12:27.852 "num_base_bdevs_discovered": 1, 00:12:27.852 "num_base_bdevs_operational": 4, 00:12:27.852 "base_bdevs_list": [ 00:12:27.852 { 00:12:27.852 "name": "pt1", 00:12:27.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.852 "is_configured": true, 00:12:27.852 "data_offset": 2048, 00:12:27.852 "data_size": 63488 00:12:27.852 }, 00:12:27.852 { 00:12:27.852 "name": null, 00:12:27.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.852 "is_configured": false, 00:12:27.852 "data_offset": 2048, 00:12:27.852 "data_size": 63488 00:12:27.852 }, 00:12:27.852 { 00:12:27.852 "name": null, 00:12:27.852 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.852 
"is_configured": false, 00:12:27.852 "data_offset": 2048, 00:12:27.852 "data_size": 63488 00:12:27.852 }, 00:12:27.852 { 00:12:27.852 "name": null, 00:12:27.852 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.852 "is_configured": false, 00:12:27.852 "data_offset": 2048, 00:12:27.852 "data_size": 63488 00:12:27.852 } 00:12:27.852 ] 00:12:27.852 }' 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.852 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.419 [2024-11-06 12:43:16.864760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.419 [2024-11-06 12:43:16.864849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.419 [2024-11-06 12:43:16.864879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:28.419 [2024-11-06 12:43:16.864898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.419 [2024-11-06 12:43:16.865470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.419 [2024-11-06 12:43:16.865510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.419 [2024-11-06 12:43:16.865620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.419 [2024-11-06 12:43:16.865665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:28.419 pt2 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.419 [2024-11-06 12:43:16.872749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.419 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.420 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.420 "name": "raid_bdev1", 00:12:28.420 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:28.420 "strip_size_kb": 0, 00:12:28.420 "state": "configuring", 00:12:28.420 "raid_level": "raid1", 00:12:28.420 "superblock": true, 00:12:28.420 "num_base_bdevs": 4, 00:12:28.420 "num_base_bdevs_discovered": 1, 00:12:28.420 "num_base_bdevs_operational": 4, 00:12:28.420 "base_bdevs_list": [ 00:12:28.420 { 00:12:28.420 "name": "pt1", 00:12:28.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.420 "is_configured": true, 00:12:28.420 "data_offset": 2048, 00:12:28.420 "data_size": 63488 00:12:28.420 }, 00:12:28.420 { 00:12:28.420 "name": null, 00:12:28.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.420 "is_configured": false, 00:12:28.420 "data_offset": 0, 00:12:28.420 "data_size": 63488 00:12:28.420 }, 00:12:28.420 { 00:12:28.420 "name": null, 00:12:28.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.420 "is_configured": false, 00:12:28.420 "data_offset": 2048, 00:12:28.420 "data_size": 63488 00:12:28.420 }, 00:12:28.420 { 00:12:28.420 "name": null, 00:12:28.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.420 "is_configured": false, 00:12:28.420 "data_offset": 2048, 00:12:28.420 "data_size": 63488 00:12:28.420 } 00:12:28.420 ] 00:12:28.420 }' 00:12:28.420 12:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.420 12:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.987 [2024-11-06 12:43:17.356866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.987 [2024-11-06 12:43:17.356944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.987 [2024-11-06 12:43:17.356983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:28.987 [2024-11-06 12:43:17.357000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.987 [2024-11-06 12:43:17.357595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.987 [2024-11-06 12:43:17.357627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.987 [2024-11-06 12:43:17.357732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.987 [2024-11-06 12:43:17.357764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.987 pt2 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.987 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:28.988 12:43:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.988 [2024-11-06 12:43:17.364829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:28.988 [2024-11-06 12:43:17.365017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.988 [2024-11-06 12:43:17.365054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:28.988 [2024-11-06 12:43:17.365069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.988 [2024-11-06 12:43:17.365509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.988 [2024-11-06 12:43:17.365545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:28.988 [2024-11-06 12:43:17.365628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:28.988 [2024-11-06 12:43:17.365656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:28.988 pt3 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.988 [2024-11-06 12:43:17.372814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:28.988 [2024-11-06 
12:43:17.372868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.988 [2024-11-06 12:43:17.372894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:28.988 [2024-11-06 12:43:17.372907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.988 [2024-11-06 12:43:17.373380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.988 [2024-11-06 12:43:17.373414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:28.988 [2024-11-06 12:43:17.373493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:28.988 [2024-11-06 12:43:17.373526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:28.988 [2024-11-06 12:43:17.373708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:28.988 [2024-11-06 12:43:17.373736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:28.988 [2024-11-06 12:43:17.374073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:28.988 [2024-11-06 12:43:17.374282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:28.988 [2024-11-06 12:43:17.374303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:28.988 [2024-11-06 12:43:17.374461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.988 pt4 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.988 "name": "raid_bdev1", 00:12:28.988 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:28.988 "strip_size_kb": 0, 00:12:28.988 "state": "online", 00:12:28.988 "raid_level": "raid1", 00:12:28.988 "superblock": true, 00:12:28.988 "num_base_bdevs": 4, 00:12:28.988 
"num_base_bdevs_discovered": 4, 00:12:28.988 "num_base_bdevs_operational": 4, 00:12:28.988 "base_bdevs_list": [ 00:12:28.988 { 00:12:28.988 "name": "pt1", 00:12:28.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.988 "is_configured": true, 00:12:28.988 "data_offset": 2048, 00:12:28.988 "data_size": 63488 00:12:28.988 }, 00:12:28.988 { 00:12:28.988 "name": "pt2", 00:12:28.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.988 "is_configured": true, 00:12:28.988 "data_offset": 2048, 00:12:28.988 "data_size": 63488 00:12:28.988 }, 00:12:28.988 { 00:12:28.988 "name": "pt3", 00:12:28.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.988 "is_configured": true, 00:12:28.988 "data_offset": 2048, 00:12:28.988 "data_size": 63488 00:12:28.988 }, 00:12:28.988 { 00:12:28.988 "name": "pt4", 00:12:28.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.988 "is_configured": true, 00:12:28.988 "data_offset": 2048, 00:12:28.988 "data_size": 63488 00:12:28.988 } 00:12:28.988 ] 00:12:28.988 }' 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.988 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.246 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.246 [2024-11-06 12:43:17.881423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.523 12:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.523 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.523 "name": "raid_bdev1", 00:12:29.523 "aliases": [ 00:12:29.523 "da7cec35-401e-48d4-915a-e99a1ba19494" 00:12:29.523 ], 00:12:29.523 "product_name": "Raid Volume", 00:12:29.523 "block_size": 512, 00:12:29.523 "num_blocks": 63488, 00:12:29.523 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:29.523 "assigned_rate_limits": { 00:12:29.523 "rw_ios_per_sec": 0, 00:12:29.523 "rw_mbytes_per_sec": 0, 00:12:29.523 "r_mbytes_per_sec": 0, 00:12:29.523 "w_mbytes_per_sec": 0 00:12:29.523 }, 00:12:29.523 "claimed": false, 00:12:29.523 "zoned": false, 00:12:29.523 "supported_io_types": { 00:12:29.523 "read": true, 00:12:29.523 "write": true, 00:12:29.523 "unmap": false, 00:12:29.523 "flush": false, 00:12:29.523 "reset": true, 00:12:29.523 "nvme_admin": false, 00:12:29.523 "nvme_io": false, 00:12:29.523 "nvme_io_md": false, 00:12:29.523 "write_zeroes": true, 00:12:29.523 "zcopy": false, 00:12:29.523 "get_zone_info": false, 00:12:29.523 "zone_management": false, 00:12:29.523 "zone_append": false, 00:12:29.523 "compare": false, 00:12:29.523 "compare_and_write": false, 00:12:29.523 "abort": false, 00:12:29.523 "seek_hole": false, 00:12:29.523 "seek_data": false, 00:12:29.523 "copy": false, 00:12:29.523 "nvme_iov_md": false 00:12:29.523 }, 00:12:29.523 "memory_domains": [ 00:12:29.523 { 00:12:29.523 "dma_device_id": "system", 00:12:29.523 
"dma_device_type": 1 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.523 "dma_device_type": 2 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "system", 00:12:29.523 "dma_device_type": 1 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.523 "dma_device_type": 2 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "system", 00:12:29.523 "dma_device_type": 1 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.523 "dma_device_type": 2 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "system", 00:12:29.523 "dma_device_type": 1 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.523 "dma_device_type": 2 00:12:29.523 } 00:12:29.523 ], 00:12:29.523 "driver_specific": { 00:12:29.523 "raid": { 00:12:29.523 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:29.523 "strip_size_kb": 0, 00:12:29.523 "state": "online", 00:12:29.523 "raid_level": "raid1", 00:12:29.523 "superblock": true, 00:12:29.523 "num_base_bdevs": 4, 00:12:29.523 "num_base_bdevs_discovered": 4, 00:12:29.523 "num_base_bdevs_operational": 4, 00:12:29.523 "base_bdevs_list": [ 00:12:29.523 { 00:12:29.523 "name": "pt1", 00:12:29.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.523 "is_configured": true, 00:12:29.523 "data_offset": 2048, 00:12:29.523 "data_size": 63488 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "name": "pt2", 00:12:29.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.523 "is_configured": true, 00:12:29.523 "data_offset": 2048, 00:12:29.523 "data_size": 63488 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "name": "pt3", 00:12:29.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.523 "is_configured": true, 00:12:29.523 "data_offset": 2048, 00:12:29.523 "data_size": 63488 00:12:29.523 }, 00:12:29.523 { 00:12:29.523 "name": "pt4", 00:12:29.523 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:29.523 "is_configured": true, 00:12:29.523 "data_offset": 2048, 00:12:29.523 "data_size": 63488 00:12:29.523 } 00:12:29.523 ] 00:12:29.523 } 00:12:29.523 } 00:12:29.523 }' 00:12:29.523 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.524 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:29.524 pt2 00:12:29.524 pt3 00:12:29.524 pt4' 00:12:29.524 12:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.524 12:43:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.524 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 [2024-11-06 12:43:18.237484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' da7cec35-401e-48d4-915a-e99a1ba19494 '!=' da7cec35-401e-48d4-915a-e99a1ba19494 ']' 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 [2024-11-06 12:43:18.293144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:29.782 12:43:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.782 "name": "raid_bdev1", 00:12:29.782 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:29.782 "strip_size_kb": 0, 00:12:29.782 "state": "online", 
00:12:29.782 "raid_level": "raid1", 00:12:29.782 "superblock": true, 00:12:29.782 "num_base_bdevs": 4, 00:12:29.782 "num_base_bdevs_discovered": 3, 00:12:29.782 "num_base_bdevs_operational": 3, 00:12:29.782 "base_bdevs_list": [ 00:12:29.782 { 00:12:29.782 "name": null, 00:12:29.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.782 "is_configured": false, 00:12:29.782 "data_offset": 0, 00:12:29.782 "data_size": 63488 00:12:29.782 }, 00:12:29.782 { 00:12:29.782 "name": "pt2", 00:12:29.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.782 "is_configured": true, 00:12:29.782 "data_offset": 2048, 00:12:29.782 "data_size": 63488 00:12:29.782 }, 00:12:29.782 { 00:12:29.782 "name": "pt3", 00:12:29.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.782 "is_configured": true, 00:12:29.782 "data_offset": 2048, 00:12:29.782 "data_size": 63488 00:12:29.782 }, 00:12:29.782 { 00:12:29.782 "name": "pt4", 00:12:29.782 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.782 "is_configured": true, 00:12:29.782 "data_offset": 2048, 00:12:29.782 "data_size": 63488 00:12:29.782 } 00:12:29.782 ] 00:12:29.782 }' 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.782 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 [2024-11-06 12:43:18.813298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.351 [2024-11-06 12:43:18.813370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.351 [2024-11-06 12:43:18.813466] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:30.351 [2024-11-06 12:43:18.813567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.351 [2024-11-06 12:43:18.813584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.351 
12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 [2024-11-06 12:43:18.917305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.351 [2024-11-06 12:43:18.917397] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.351 [2024-11-06 12:43:18.917426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:30.351 [2024-11-06 12:43:18.917441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.351 [2024-11-06 12:43:18.920411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.351 [2024-11-06 12:43:18.920456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.351 [2024-11-06 12:43:18.920576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.351 [2024-11-06 12:43:18.920636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.351 pt2 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.351 "name": "raid_bdev1", 00:12:30.351 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:30.351 "strip_size_kb": 0, 00:12:30.351 "state": "configuring", 00:12:30.351 "raid_level": "raid1", 00:12:30.351 "superblock": true, 00:12:30.351 "num_base_bdevs": 4, 00:12:30.351 "num_base_bdevs_discovered": 1, 00:12:30.351 "num_base_bdevs_operational": 3, 00:12:30.351 "base_bdevs_list": [ 00:12:30.351 { 00:12:30.351 "name": null, 00:12:30.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.351 "is_configured": false, 00:12:30.351 "data_offset": 2048, 00:12:30.351 "data_size": 63488 00:12:30.351 }, 00:12:30.351 { 00:12:30.351 "name": "pt2", 00:12:30.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.351 "is_configured": true, 00:12:30.351 "data_offset": 2048, 00:12:30.351 "data_size": 63488 00:12:30.351 }, 00:12:30.351 { 00:12:30.351 "name": null, 00:12:30.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.351 "is_configured": false, 00:12:30.351 "data_offset": 2048, 00:12:30.351 "data_size": 63488 00:12:30.351 }, 00:12:30.351 { 00:12:30.351 "name": null, 00:12:30.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.351 "is_configured": false, 00:12:30.351 "data_offset": 2048, 00:12:30.351 "data_size": 63488 00:12:30.351 } 00:12:30.351 ] 00:12:30.351 }' 
00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.351 12:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.918 [2024-11-06 12:43:19.445482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.918 [2024-11-06 12:43:19.445578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.918 [2024-11-06 12:43:19.445611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:30.918 [2024-11-06 12:43:19.445626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.918 [2024-11-06 12:43:19.446242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.918 [2024-11-06 12:43:19.446606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.918 [2024-11-06 12:43:19.446759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.918 [2024-11-06 12:43:19.446915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.918 pt3 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.918 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.918 "name": "raid_bdev1", 00:12:30.918 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:30.918 "strip_size_kb": 0, 00:12:30.918 "state": "configuring", 00:12:30.918 "raid_level": "raid1", 00:12:30.919 "superblock": true, 00:12:30.919 "num_base_bdevs": 4, 00:12:30.919 "num_base_bdevs_discovered": 2, 00:12:30.919 "num_base_bdevs_operational": 3, 00:12:30.919 
"base_bdevs_list": [ 00:12:30.919 { 00:12:30.919 "name": null, 00:12:30.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.919 "is_configured": false, 00:12:30.919 "data_offset": 2048, 00:12:30.919 "data_size": 63488 00:12:30.919 }, 00:12:30.919 { 00:12:30.919 "name": "pt2", 00:12:30.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.919 "is_configured": true, 00:12:30.919 "data_offset": 2048, 00:12:30.919 "data_size": 63488 00:12:30.919 }, 00:12:30.919 { 00:12:30.919 "name": "pt3", 00:12:30.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.919 "is_configured": true, 00:12:30.919 "data_offset": 2048, 00:12:30.919 "data_size": 63488 00:12:30.919 }, 00:12:30.919 { 00:12:30.919 "name": null, 00:12:30.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.919 "is_configured": false, 00:12:30.919 "data_offset": 2048, 00:12:30.919 "data_size": 63488 00:12:30.919 } 00:12:30.919 ] 00:12:30.919 }' 00:12:30.919 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.919 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.485 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:31.485 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:31.485 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:31.485 12:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:31.485 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.485 12:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.485 [2024-11-06 12:43:19.997665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:31.485 [2024-11-06 12:43:19.997993] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.485 [2024-11-06 12:43:19.998044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:31.485 [2024-11-06 12:43:19.998061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.485 [2024-11-06 12:43:19.998624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.485 [2024-11-06 12:43:19.998649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:31.485 [2024-11-06 12:43:19.998753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:31.485 [2024-11-06 12:43:19.998792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:31.485 [2024-11-06 12:43:19.998961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:31.485 [2024-11-06 12:43:19.998976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.485 [2024-11-06 12:43:19.999299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:31.485 [2024-11-06 12:43:19.999508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:31.485 [2024-11-06 12:43:19.999529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:31.485 [2024-11-06 12:43:19.999701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.485 pt4 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.485 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.485 "name": "raid_bdev1", 00:12:31.485 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:31.485 "strip_size_kb": 0, 00:12:31.485 "state": "online", 00:12:31.485 "raid_level": "raid1", 00:12:31.485 "superblock": true, 00:12:31.485 "num_base_bdevs": 4, 00:12:31.485 "num_base_bdevs_discovered": 3, 00:12:31.485 "num_base_bdevs_operational": 3, 00:12:31.485 "base_bdevs_list": [ 00:12:31.485 { 00:12:31.485 "name": null, 00:12:31.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.486 "is_configured": false, 00:12:31.486 
"data_offset": 2048, 00:12:31.486 "data_size": 63488 00:12:31.486 }, 00:12:31.486 { 00:12:31.486 "name": "pt2", 00:12:31.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.486 "is_configured": true, 00:12:31.486 "data_offset": 2048, 00:12:31.486 "data_size": 63488 00:12:31.486 }, 00:12:31.486 { 00:12:31.486 "name": "pt3", 00:12:31.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.486 "is_configured": true, 00:12:31.486 "data_offset": 2048, 00:12:31.486 "data_size": 63488 00:12:31.486 }, 00:12:31.486 { 00:12:31.486 "name": "pt4", 00:12:31.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.486 "is_configured": true, 00:12:31.486 "data_offset": 2048, 00:12:31.486 "data_size": 63488 00:12:31.486 } 00:12:31.486 ] 00:12:31.486 }' 00:12:31.486 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.486 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 [2024-11-06 12:43:20.517755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.054 [2024-11-06 12:43:20.517809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.054 [2024-11-06 12:43:20.517906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.054 [2024-11-06 12:43:20.518003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.054 [2024-11-06 12:43:20.518023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:32.054 12:43:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.054 [2024-11-06 12:43:20.589767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:32.054 [2024-11-06 12:43:20.589871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:32.054 [2024-11-06 12:43:20.589900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:32.054 [2024-11-06 12:43:20.589918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.054 [2024-11-06 12:43:20.592821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.054 [2024-11-06 12:43:20.593129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:32.054 [2024-11-06 12:43:20.593282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:32.054 [2024-11-06 12:43:20.593351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:32.054 [2024-11-06 12:43:20.593517] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:32.054 [2024-11-06 12:43:20.593544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.054 [2024-11-06 12:43:20.593565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:32.054 [2024-11-06 12:43:20.593646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:32.054 [2024-11-06 12:43:20.593797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:32.054 pt1 00:12:32.054 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.055 "name": "raid_bdev1", 00:12:32.055 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:32.055 "strip_size_kb": 0, 00:12:32.055 "state": "configuring", 00:12:32.055 "raid_level": "raid1", 00:12:32.055 "superblock": true, 00:12:32.055 "num_base_bdevs": 4, 00:12:32.055 "num_base_bdevs_discovered": 2, 00:12:32.055 "num_base_bdevs_operational": 3, 00:12:32.055 "base_bdevs_list": [ 00:12:32.055 { 00:12:32.055 "name": null, 00:12:32.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.055 "is_configured": false, 00:12:32.055 "data_offset": 2048, 00:12:32.055 
"data_size": 63488 00:12:32.055 }, 00:12:32.055 { 00:12:32.055 "name": "pt2", 00:12:32.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.055 "is_configured": true, 00:12:32.055 "data_offset": 2048, 00:12:32.055 "data_size": 63488 00:12:32.055 }, 00:12:32.055 { 00:12:32.055 "name": "pt3", 00:12:32.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.055 "is_configured": true, 00:12:32.055 "data_offset": 2048, 00:12:32.055 "data_size": 63488 00:12:32.055 }, 00:12:32.055 { 00:12:32.055 "name": null, 00:12:32.055 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.055 "is_configured": false, 00:12:32.055 "data_offset": 2048, 00:12:32.055 "data_size": 63488 00:12:32.055 } 00:12:32.055 ] 00:12:32.055 }' 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.055 12:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.640 [2024-11-06 
12:43:21.170060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:32.640 [2024-11-06 12:43:21.170173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.640 [2024-11-06 12:43:21.170232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:32.640 [2024-11-06 12:43:21.170250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.640 [2024-11-06 12:43:21.170803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.640 [2024-11-06 12:43:21.170829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:32.640 [2024-11-06 12:43:21.170933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:32.640 [2024-11-06 12:43:21.170966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:32.640 [2024-11-06 12:43:21.171127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:32.640 [2024-11-06 12:43:21.171142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.640 [2024-11-06 12:43:21.171514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:32.640 [2024-11-06 12:43:21.171698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:32.640 [2024-11-06 12:43:21.171719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:32.640 [2024-11-06 12:43:21.171887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.640 pt4 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.640 12:43:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.640 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.641 "name": "raid_bdev1", 00:12:32.641 "uuid": "da7cec35-401e-48d4-915a-e99a1ba19494", 00:12:32.641 "strip_size_kb": 0, 00:12:32.641 "state": "online", 00:12:32.641 "raid_level": "raid1", 00:12:32.641 "superblock": true, 00:12:32.641 "num_base_bdevs": 4, 00:12:32.641 "num_base_bdevs_discovered": 3, 00:12:32.641 "num_base_bdevs_operational": 3, 00:12:32.641 "base_bdevs_list": [ 00:12:32.641 { 
00:12:32.641 "name": null, 00:12:32.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.641 "is_configured": false, 00:12:32.641 "data_offset": 2048, 00:12:32.641 "data_size": 63488 00:12:32.641 }, 00:12:32.641 { 00:12:32.641 "name": "pt2", 00:12:32.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.641 "is_configured": true, 00:12:32.641 "data_offset": 2048, 00:12:32.641 "data_size": 63488 00:12:32.641 }, 00:12:32.641 { 00:12:32.641 "name": "pt3", 00:12:32.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.641 "is_configured": true, 00:12:32.641 "data_offset": 2048, 00:12:32.641 "data_size": 63488 00:12:32.641 }, 00:12:32.641 { 00:12:32.641 "name": "pt4", 00:12:32.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.641 "is_configured": true, 00:12:32.641 "data_offset": 2048, 00:12:32.641 "data_size": 63488 00:12:32.641 } 00:12:32.641 ] 00:12:32.641 }' 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.641 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.207 
12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:33.207 [2024-11-06 12:43:21.734565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' da7cec35-401e-48d4-915a-e99a1ba19494 '!=' da7cec35-401e-48d4-915a-e99a1ba19494 ']' 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74708 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74708 ']' 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74708 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74708 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:33.207 killing process with pid 74708 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74708' 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74708 00:12:33.207 [2024-11-06 12:43:21.823600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.207 12:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74708 00:12:33.207 [2024-11-06 12:43:21.823707] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.207 [2024-11-06 12:43:21.823802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.207 [2024-11-06 12:43:21.823822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:33.777 [2024-11-06 12:43:22.173826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.714 ************************************ 00:12:34.715 END TEST raid_superblock_test 00:12:34.715 ************************************ 00:12:34.715 12:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:34.715 00:12:34.715 real 0m9.403s 00:12:34.715 user 0m15.450s 00:12:34.715 sys 0m1.410s 00:12:34.715 12:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:34.715 12:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.715 12:43:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:34.715 12:43:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:34.715 12:43:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:34.715 12:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.715 ************************************ 00:12:34.715 START TEST raid_read_error_test 00:12:34.715 ************************************ 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:34.715 12:43:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oQM0dYnXl6 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75208 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75208 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75208 ']' 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.715 12:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.988 [2024-11-06 12:43:23.386577] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:12:34.988 [2024-11-06 12:43:23.387029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75208 ] 00:12:34.988 [2024-11-06 12:43:23.569403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.248 [2024-11-06 12:43:23.698059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.248 [2024-11-06 12:43:23.900433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.248 [2024-11-06 12:43:23.900731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 BaseBdev1_malloc 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 true 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.817 [2024-11-06 12:43:24.460417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.817 [2024-11-06 12:43:24.460504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.817 [2024-11-06 12:43:24.460540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:35.817 [2024-11-06 12:43:24.460561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.817 [2024-11-06 12:43:24.463317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.817 [2024-11-06 12:43:24.463379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.817 BaseBdev1 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.817 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 BaseBdev2_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 true 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 [2024-11-06 12:43:24.515964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:36.077 [2024-11-06 12:43:24.516058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.077 [2024-11-06 12:43:24.516083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:36.077 [2024-11-06 12:43:24.516100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.077 [2024-11-06 12:43:24.518797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.077 [2024-11-06 12:43:24.519058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.077 BaseBdev2 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 BaseBdev3_malloc 00:12:36.077 12:43:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 true 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 [2024-11-06 12:43:24.589491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:36.077 [2024-11-06 12:43:24.589778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.077 [2024-11-06 12:43:24.589816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:36.077 [2024-11-06 12:43:24.589835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.077 [2024-11-06 12:43:24.592638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.077 [2024-11-06 12:43:24.592689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:36.077 BaseBdev3 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 BaseBdev4_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 true 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 [2024-11-06 12:43:24.645535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:36.077 [2024-11-06 12:43:24.645614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.077 [2024-11-06 12:43:24.645641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.077 [2024-11-06 12:43:24.645659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.077 [2024-11-06 12:43:24.648415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.077 [2024-11-06 12:43:24.648460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:36.077 BaseBdev4 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.077 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.078 [2024-11-06 12:43:24.653611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.078 [2024-11-06 12:43:24.656028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.078 [2024-11-06 12:43:24.656137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.078 [2024-11-06 12:43:24.656258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.078 [2024-11-06 12:43:24.656559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:36.078 [2024-11-06 12:43:24.656592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.078 [2024-11-06 12:43:24.656906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:36.078 [2024-11-06 12:43:24.657131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:36.078 [2024-11-06 12:43:24.657154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:36.078 [2024-11-06 12:43:24.657364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:36.078 12:43:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.078 "name": "raid_bdev1", 00:12:36.078 "uuid": "e246bab2-8145-4960-b9c0-4fd11f90deee", 00:12:36.078 "strip_size_kb": 0, 00:12:36.078 "state": "online", 00:12:36.078 "raid_level": "raid1", 00:12:36.078 "superblock": true, 00:12:36.078 "num_base_bdevs": 4, 00:12:36.078 "num_base_bdevs_discovered": 4, 00:12:36.078 "num_base_bdevs_operational": 4, 00:12:36.078 "base_bdevs_list": [ 00:12:36.078 { 
00:12:36.078 "name": "BaseBdev1", 00:12:36.078 "uuid": "845aa3c5-a60b-5dd8-a436-5c4ad0a4515e", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 }, 00:12:36.078 { 00:12:36.078 "name": "BaseBdev2", 00:12:36.078 "uuid": "44f06dd4-7026-567e-9b3a-4321929adef4", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 }, 00:12:36.078 { 00:12:36.078 "name": "BaseBdev3", 00:12:36.078 "uuid": "139267d0-8ff9-5b72-b73a-bd5a534b1b51", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 }, 00:12:36.078 { 00:12:36.078 "name": "BaseBdev4", 00:12:36.078 "uuid": "6319149f-7458-5a1e-beba-036611aa5a5b", 00:12:36.078 "is_configured": true, 00:12:36.078 "data_offset": 2048, 00:12:36.078 "data_size": 63488 00:12:36.078 } 00:12:36.078 ] 00:12:36.078 }' 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.078 12:43:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.644 12:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:36.644 12:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.644 [2024-11-06 12:43:25.259472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.581 12:43:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.581 12:43:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.581 "name": "raid_bdev1", 00:12:37.581 "uuid": "e246bab2-8145-4960-b9c0-4fd11f90deee", 00:12:37.581 "strip_size_kb": 0, 00:12:37.581 "state": "online", 00:12:37.581 "raid_level": "raid1", 00:12:37.581 "superblock": true, 00:12:37.581 "num_base_bdevs": 4, 00:12:37.581 "num_base_bdevs_discovered": 4, 00:12:37.581 "num_base_bdevs_operational": 4, 00:12:37.581 "base_bdevs_list": [ 00:12:37.581 { 00:12:37.581 "name": "BaseBdev1", 00:12:37.581 "uuid": "845aa3c5-a60b-5dd8-a436-5c4ad0a4515e", 00:12:37.581 "is_configured": true, 00:12:37.581 "data_offset": 2048, 00:12:37.581 "data_size": 63488 00:12:37.581 }, 00:12:37.581 { 00:12:37.581 "name": "BaseBdev2", 00:12:37.581 "uuid": "44f06dd4-7026-567e-9b3a-4321929adef4", 00:12:37.581 "is_configured": true, 00:12:37.581 "data_offset": 2048, 00:12:37.581 "data_size": 63488 00:12:37.581 }, 00:12:37.581 { 00:12:37.581 "name": "BaseBdev3", 00:12:37.581 "uuid": "139267d0-8ff9-5b72-b73a-bd5a534b1b51", 00:12:37.581 "is_configured": true, 00:12:37.581 "data_offset": 2048, 00:12:37.581 "data_size": 63488 00:12:37.581 }, 00:12:37.581 { 00:12:37.581 "name": "BaseBdev4", 00:12:37.581 "uuid": "6319149f-7458-5a1e-beba-036611aa5a5b", 00:12:37.581 "is_configured": true, 00:12:37.581 "data_offset": 2048, 00:12:37.581 "data_size": 63488 00:12:37.581 } 00:12:37.581 ] 00:12:37.581 }' 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.581 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.150 [2024-11-06 12:43:26.621242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.150 [2024-11-06 12:43:26.621301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.150 [2024-11-06 12:43:26.624606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.150 [2024-11-06 12:43:26.624688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.150 [2024-11-06 12:43:26.624842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.150 [2024-11-06 12:43:26.624863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:38.150 { 00:12:38.150 "results": [ 00:12:38.150 { 00:12:38.150 "job": "raid_bdev1", 00:12:38.150 "core_mask": "0x1", 00:12:38.150 "workload": "randrw", 00:12:38.150 "percentage": 50, 00:12:38.150 "status": "finished", 00:12:38.150 "queue_depth": 1, 00:12:38.150 "io_size": 131072, 00:12:38.150 "runtime": 1.35933, 00:12:38.150 "iops": 7631.700911478449, 00:12:38.150 "mibps": 953.9626139348061, 00:12:38.150 "io_failed": 0, 00:12:38.150 "io_timeout": 0, 00:12:38.150 "avg_latency_us": 126.72646090751354, 00:12:38.150 "min_latency_us": 41.192727272727275, 00:12:38.150 "max_latency_us": 1846.9236363636364 00:12:38.150 } 00:12:38.150 ], 00:12:38.150 "core_count": 1 00:12:38.150 } 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75208 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75208 ']' 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75208 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75208 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:38.150 killing process with pid 75208 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75208' 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75208 00:12:38.150 [2024-11-06 12:43:26.656488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.150 12:43:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75208 00:12:38.409 [2024-11-06 12:43:26.950965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oQM0dYnXl6 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:39.801 00:12:39.801 real 0m4.792s 00:12:39.801 user 0m5.851s 00:12:39.801 sys 0m0.590s 
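The bdevperf results JSON dumped above can be cross-checked arithmetically: with `io_size` of 131072 bytes (128 KiB), MiB/s is IOPS × 128 KiB / 1 MiB, i.e. IOPS / 8, and the `fail_per_s` figure that `bdev_raid.sh@845` later extracts with the `grep`/`awk` pipeline is failed I/Os over runtime. A minimal Python sketch of that cross-check, with the literal values copied from the results block in this log:

```python
# Cross-check the bdevperf summary above: mibps should equal
# iops * io_size / 2**20 (io_size is 128 KiB here, so mibps == iops / 8).
result = {
    "job": "raid_bdev1",
    "io_size": 131072,
    "runtime": 1.35933,
    "iops": 7631.700911478449,
    "mibps": 953.9626139348061,
    "io_failed": 0,
}

derived_mibps = result["iops"] * result["io_size"] / 2**20
assert abs(derived_mibps - result["mibps"]) < 1e-9

# fail_per_s is io_failed / runtime; with zero failed reads it
# prints as 0.00, which is what bdev_raid.sh@847 asserts against.
fail_per_s = result["io_failed"] / result["runtime"]
assert f"{fail_per_s:.2f}" == "0.00"
```

The read-error run passes because raid1 services the failed read from a mirror copy, so `io_failed` stays 0 even with the error bdev injecting read failures.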
00:12:39.801 ************************************ 00:12:39.801 END TEST raid_read_error_test 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.801 12:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.801 ************************************ 00:12:39.801 12:43:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:39.801 12:43:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:39.801 12:43:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:39.801 12:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.801 ************************************ 00:12:39.801 START TEST raid_write_error_test 00:12:39.801 ************************************ 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:39.801 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HRbF97YfxS 00:12:39.802 12:43:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75354 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75354 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75354 ']' 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:39.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:39.802 12:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.802 [2024-11-06 12:43:28.237893] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
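`bdev_raid.sh@800-804` force `strip_size=0` because raid1 is not striped, and `bdev_raid.sh@831-835` (exercised in both the read run above and the write run below) pick the expected post-injection base-bdev count: on raid1, a write failure fails the member out of the array, while a read failure is recovered from the mirror and the member stays. A minimal Python sketch of that decision logic (the function name is hypothetical, not an SPDK API):

```python
def expected_num_base_bdevs(raid_level: str, num_base_bdevs: int,
                            error_io_type: str) -> int:
    """Mirror bdev_raid.sh@831-835: after injecting an I/O error on one
    base bdev, raid1 drops that member only for write failures; read
    failures are served from the mirror and the member remains."""
    if raid_level == "raid1" and error_io_type == "write":
        return num_base_bdevs - 1
    return num_base_bdevs

# Matches the two runs in this log:
assert expected_num_base_bdevs("raid1", 4, "read") == 4   # raid_read_error_test
assert expected_num_base_bdevs("raid1", 4, "write") == 3  # raid_write_error_test
```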
00:12:39.802 [2024-11-06 12:43:28.238092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75354 ] 00:12:39.802 [2024-11-06 12:43:28.427162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.060 [2024-11-06 12:43:28.621600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.319 [2024-11-06 12:43:28.825787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.319 [2024-11-06 12:43:28.825847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 BaseBdev1_malloc 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 true 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 [2024-11-06 12:43:29.351226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:40.887 [2024-11-06 12:43:29.351305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.887 [2024-11-06 12:43:29.351344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:40.887 [2024-11-06 12:43:29.351363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.887 [2024-11-06 12:43:29.354115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.887 [2024-11-06 12:43:29.354164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.887 BaseBdev1 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 BaseBdev2_malloc 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:40.887 12:43:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 true 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.887 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.887 [2024-11-06 12:43:29.410896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:40.887 [2024-11-06 12:43:29.410981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.887 [2024-11-06 12:43:29.411008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:40.888 [2024-11-06 12:43:29.411027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.888 [2024-11-06 12:43:29.413804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.888 [2024-11-06 12:43:29.413852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.888 BaseBdev2 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:40.888 BaseBdev3_malloc 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 true 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 [2024-11-06 12:43:29.479786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:40.888 [2024-11-06 12:43:29.479867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.888 [2024-11-06 12:43:29.479893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:40.888 [2024-11-06 12:43:29.479912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.888 [2024-11-06 12:43:29.482642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.888 [2024-11-06 12:43:29.482688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:40.888 BaseBdev3 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 BaseBdev4_malloc 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 true 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 [2024-11-06 12:43:29.535709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:40.888 [2024-11-06 12:43:29.535792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.888 [2024-11-06 12:43:29.535820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:40.888 [2024-11-06 12:43:29.535839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.888 [2024-11-06 12:43:29.538636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.888 [2024-11-06 12:43:29.538688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:40.888 BaseBdev4 
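Each of the four base bdevs above is built as a malloc → error → passthru chain (`bdev_raid.sh@815-817`) so that `bdev_error_inject_error` can later target `EE_BaseBdevN_malloc`. Once the raid is assembled, `verify_raid_bdev_state` (`bdev_raid.sh@103-115`) fetches `bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares the fields. A minimal Python sketch of those comparisons, run against a trimmed copy of the JSON dumped in this log (the helper name mirrors the shell function but is otherwise hypothetical):

```python
import json

def verify_raid_bdev_state(info: dict, expected_state: str, raid_level: str,
                           strip_size: int, num_operational: int) -> None:
    # The shell helper extracts the same fields with jq and [[ ... ]] tests.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered count must agree with the configured members in the list
    # (after a write-error removal, the failed slot shows is_configured false).
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured

# Trimmed from the bdev_raid_get_bdevs output in this log:
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1", "strip_size_kb": 0, "state": "online",
  "raid_level": "raid1", "superblock": true, "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4, "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}""")

verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 4)
```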
00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.888 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.147 [2024-11-06 12:43:29.547851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.147 [2024-11-06 12:43:29.550362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.147 [2024-11-06 12:43:29.550477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.147 [2024-11-06 12:43:29.550580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.147 [2024-11-06 12:43:29.550905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:41.147 [2024-11-06 12:43:29.550929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.147 [2024-11-06 12:43:29.551302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:41.147 [2024-11-06 12:43:29.551566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:41.147 [2024-11-06 12:43:29.551582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:41.147 [2024-11-06 12:43:29.551848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.147 "name": "raid_bdev1", 00:12:41.147 "uuid": "2bc3e4e6-b448-4871-9fc2-c22ee843badf", 00:12:41.147 "strip_size_kb": 0, 00:12:41.147 "state": "online", 00:12:41.147 "raid_level": "raid1", 00:12:41.147 "superblock": true, 00:12:41.147 "num_base_bdevs": 4, 00:12:41.147 "num_base_bdevs_discovered": 4, 00:12:41.147 
"num_base_bdevs_operational": 4, 00:12:41.147 "base_bdevs_list": [ 00:12:41.147 { 00:12:41.147 "name": "BaseBdev1", 00:12:41.147 "uuid": "a029b977-0231-5d64-ac85-80b0a4955424", 00:12:41.147 "is_configured": true, 00:12:41.147 "data_offset": 2048, 00:12:41.147 "data_size": 63488 00:12:41.147 }, 00:12:41.147 { 00:12:41.147 "name": "BaseBdev2", 00:12:41.147 "uuid": "9efb5728-aac7-5650-ac42-e1ca3b61e728", 00:12:41.147 "is_configured": true, 00:12:41.147 "data_offset": 2048, 00:12:41.147 "data_size": 63488 00:12:41.147 }, 00:12:41.147 { 00:12:41.147 "name": "BaseBdev3", 00:12:41.147 "uuid": "e8a2618c-fa07-5c60-973b-1809901fe34c", 00:12:41.147 "is_configured": true, 00:12:41.147 "data_offset": 2048, 00:12:41.147 "data_size": 63488 00:12:41.147 }, 00:12:41.147 { 00:12:41.147 "name": "BaseBdev4", 00:12:41.147 "uuid": "ffb72efe-b6ff-552c-beb1-410a2524bfd5", 00:12:41.147 "is_configured": true, 00:12:41.147 "data_offset": 2048, 00:12:41.147 "data_size": 63488 00:12:41.147 } 00:12:41.147 ] 00:12:41.147 }' 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.147 12:43:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.405 12:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:41.405 12:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.663 [2024-11-06 12:43:30.189354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.606 [2024-11-06 12:43:31.058740] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:42.606 [2024-11-06 12:43:31.058824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.606 [2024-11-06 12:43:31.059106] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.606 "name": "raid_bdev1", 00:12:42.606 "uuid": "2bc3e4e6-b448-4871-9fc2-c22ee843badf", 00:12:42.606 "strip_size_kb": 0, 00:12:42.606 "state": "online", 00:12:42.606 "raid_level": "raid1", 00:12:42.606 "superblock": true, 00:12:42.606 "num_base_bdevs": 4, 00:12:42.606 "num_base_bdevs_discovered": 3, 00:12:42.606 "num_base_bdevs_operational": 3, 00:12:42.606 "base_bdevs_list": [ 00:12:42.606 { 00:12:42.606 "name": null, 00:12:42.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.606 "is_configured": false, 00:12:42.606 "data_offset": 0, 00:12:42.606 "data_size": 63488 00:12:42.606 }, 00:12:42.606 { 00:12:42.606 "name": "BaseBdev2", 00:12:42.606 "uuid": "9efb5728-aac7-5650-ac42-e1ca3b61e728", 00:12:42.606 "is_configured": true, 00:12:42.606 "data_offset": 2048, 00:12:42.606 "data_size": 63488 00:12:42.606 }, 00:12:42.606 { 00:12:42.606 "name": "BaseBdev3", 00:12:42.606 "uuid": "e8a2618c-fa07-5c60-973b-1809901fe34c", 00:12:42.606 "is_configured": true, 00:12:42.606 "data_offset": 2048, 00:12:42.606 "data_size": 63488 00:12:42.606 }, 00:12:42.606 { 00:12:42.606 "name": "BaseBdev4", 00:12:42.606 "uuid": "ffb72efe-b6ff-552c-beb1-410a2524bfd5", 00:12:42.606 "is_configured": true, 00:12:42.606 "data_offset": 2048, 00:12:42.606 "data_size": 63488 00:12:42.606 } 00:12:42.606 ] 
00:12:42.606 }' 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.606 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.173 [2024-11-06 12:43:31.598461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.173 [2024-11-06 12:43:31.598512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.173 [2024-11-06 12:43:31.601781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.173 [2024-11-06 12:43:31.601841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.173 [2024-11-06 12:43:31.601974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.173 [2024-11-06 12:43:31.601994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:43.173 { 00:12:43.173 "results": [ 00:12:43.173 { 00:12:43.173 "job": "raid_bdev1", 00:12:43.173 "core_mask": "0x1", 00:12:43.173 "workload": "randrw", 00:12:43.173 "percentage": 50, 00:12:43.173 "status": "finished", 00:12:43.173 "queue_depth": 1, 00:12:43.173 "io_size": 131072, 00:12:43.173 "runtime": 1.406608, 00:12:43.173 "iops": 8423.811040460456, 00:12:43.173 "mibps": 1052.976380057557, 00:12:43.173 "io_failed": 0, 00:12:43.173 "io_timeout": 0, 00:12:43.173 "avg_latency_us": 114.45619193027413, 00:12:43.173 "min_latency_us": 42.35636363636364, 00:12:43.173 "max_latency_us": 1861.8181818181818 00:12:43.173 } 00:12:43.173 ], 00:12:43.173 "core_count": 1 
00:12:43.173 } 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75354 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75354 ']' 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75354 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75354 00:12:43.173 killing process with pid 75354 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75354' 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75354 00:12:43.173 [2024-11-06 12:43:31.637565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.173 12:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75354 00:12:43.431 [2024-11-06 12:43:31.926502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.364 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:44.364 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HRbF97YfxS 00:12:44.364 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:44.622 00:12:44.622 real 0m4.918s 00:12:44.622 user 0m6.097s 00:12:44.622 sys 0m0.647s 00:12:44.622 ************************************ 00:12:44.622 END TEST raid_write_error_test 00:12:44.622 ************************************ 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.622 12:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.622 12:43:33 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:44.622 12:43:33 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:44.622 12:43:33 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:44.622 12:43:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:44.622 12:43:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:44.622 12:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.622 ************************************ 00:12:44.622 START TEST raid_rebuild_test 00:12:44.622 ************************************ 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:44.622 
12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75498 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75498 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75498 ']' 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:44.622 12:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:44.622 Zero copy mechanism will not be used. 00:12:44.622 [2024-11-06 12:43:33.186040] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:12:44.622 [2024-11-06 12:43:33.186208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75498 ] 00:12:44.880 [2024-11-06 12:43:33.362994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.880 [2024-11-06 12:43:33.496435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.141 [2024-11-06 12:43:33.701091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.141 [2024-11-06 12:43:33.701180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.708 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:45.708 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.709 BaseBdev1_malloc 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.709 [2024-11-06 12:43:34.283479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:45.709 
[2024-11-06 12:43:34.283563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.709 [2024-11-06 12:43:34.283597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.709 [2024-11-06 12:43:34.283618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.709 [2024-11-06 12:43:34.286398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.709 [2024-11-06 12:43:34.286582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.709 BaseBdev1 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.709 BaseBdev2_malloc 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.709 [2024-11-06 12:43:34.331978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:45.709 [2024-11-06 12:43:34.332073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.709 [2024-11-06 12:43:34.332103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:45.709 [2024-11-06 12:43:34.332121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.709 [2024-11-06 12:43:34.334863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.709 [2024-11-06 12:43:34.334913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.709 BaseBdev2 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.709 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.968 spare_malloc 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.968 spare_delay 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.968 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.968 [2024-11-06 12:43:34.406894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.968 [2024-11-06 12:43:34.407227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:45.969 [2024-11-06 12:43:34.407275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:45.969 [2024-11-06 12:43:34.407297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.969 [2024-11-06 12:43:34.410173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.969 [2024-11-06 12:43:34.410238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.969 spare 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.969 [2024-11-06 12:43:34.418928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.969 [2024-11-06 12:43:34.421295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.969 [2024-11-06 12:43:34.421420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:45.969 [2024-11-06 12:43:34.421443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:45.969 [2024-11-06 12:43:34.421771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.969 [2024-11-06 12:43:34.421984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:45.969 [2024-11-06 12:43:34.422003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:45.969 [2024-11-06 12:43:34.422186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.969 "name": "raid_bdev1", 00:12:45.969 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:45.969 "strip_size_kb": 0, 00:12:45.969 "state": "online", 00:12:45.969 
"raid_level": "raid1", 00:12:45.969 "superblock": false, 00:12:45.969 "num_base_bdevs": 2, 00:12:45.969 "num_base_bdevs_discovered": 2, 00:12:45.969 "num_base_bdevs_operational": 2, 00:12:45.969 "base_bdevs_list": [ 00:12:45.969 { 00:12:45.969 "name": "BaseBdev1", 00:12:45.969 "uuid": "7031ee4d-4a0f-5fb9-ac18-311de9604ef4", 00:12:45.969 "is_configured": true, 00:12:45.969 "data_offset": 0, 00:12:45.969 "data_size": 65536 00:12:45.969 }, 00:12:45.969 { 00:12:45.969 "name": "BaseBdev2", 00:12:45.969 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:45.969 "is_configured": true, 00:12:45.969 "data_offset": 0, 00:12:45.969 "data_size": 65536 00:12:45.969 } 00:12:45.969 ] 00:12:45.969 }' 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.969 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:46.538 [2024-11-06 12:43:34.943485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.538 12:43:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.538 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:46.796 [2024-11-06 12:43:35.327314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:46.796 /dev/nbd0 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.796 1+0 records in 00:12:46.796 1+0 records out 00:12:46.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060116 s, 6.8 MB/s 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:46.796 12:43:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:53.370 65536+0 records in 00:12:53.370 65536+0 records out 00:12:53.370 33554432 bytes (34 MB, 32 MiB) copied, 6.61612 s, 5.1 MB/s 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.370 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:53.934 [2024-11-06 12:43:42.302893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.934 [2024-11-06 12:43:42.342983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.934 "name": "raid_bdev1", 00:12:53.934 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:53.934 "strip_size_kb": 0, 00:12:53.934 "state": "online", 00:12:53.934 "raid_level": "raid1", 00:12:53.934 "superblock": false, 00:12:53.934 "num_base_bdevs": 2, 00:12:53.934 "num_base_bdevs_discovered": 1, 00:12:53.934 "num_base_bdevs_operational": 1, 00:12:53.934 "base_bdevs_list": [ 00:12:53.934 { 00:12:53.934 "name": null, 00:12:53.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.934 "is_configured": false, 00:12:53.934 "data_offset": 0, 00:12:53.934 "data_size": 65536 00:12:53.934 }, 00:12:53.934 { 00:12:53.934 "name": "BaseBdev2", 00:12:53.934 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:53.934 "is_configured": true, 00:12:53.934 "data_offset": 0, 00:12:53.934 "data_size": 65536 00:12:53.934 } 00:12:53.934 ] 00:12:53.934 }' 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.934 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.192 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.192 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.192 12:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.192 [2024-11-06 12:43:42.839207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.450 [2024-11-06 12:43:42.855645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:54.450 12:43:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.450 12:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:54.450 [2024-11-06 12:43:42.858047] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.420 12:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.421 12:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.421 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.421 "name": "raid_bdev1", 00:12:55.421 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:55.421 "strip_size_kb": 0, 00:12:55.421 "state": "online", 00:12:55.421 "raid_level": "raid1", 00:12:55.421 "superblock": false, 00:12:55.421 "num_base_bdevs": 2, 00:12:55.421 "num_base_bdevs_discovered": 2, 00:12:55.421 "num_base_bdevs_operational": 2, 00:12:55.421 "process": { 00:12:55.421 "type": "rebuild", 00:12:55.421 "target": "spare", 00:12:55.421 "progress": { 00:12:55.421 "blocks": 20480, 
00:12:55.421 "percent": 31 00:12:55.421 } 00:12:55.421 }, 00:12:55.421 "base_bdevs_list": [ 00:12:55.421 { 00:12:55.421 "name": "spare", 00:12:55.421 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:12:55.421 "is_configured": true, 00:12:55.421 "data_offset": 0, 00:12:55.421 "data_size": 65536 00:12:55.421 }, 00:12:55.421 { 00:12:55.421 "name": "BaseBdev2", 00:12:55.421 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:55.421 "is_configured": true, 00:12:55.421 "data_offset": 0, 00:12:55.421 "data_size": 65536 00:12:55.421 } 00:12:55.421 ] 00:12:55.421 }' 00:12:55.421 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.421 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.421 12:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.421 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.421 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:55.421 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.421 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.421 [2024-11-06 12:43:44.031070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.421 [2024-11-06 12:43:44.066907] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.421 [2024-11-06 12:43:44.067218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.421 [2024-11-06 12:43:44.067466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.421 [2024-11-06 12:43:44.067623] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.677 12:43:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.677 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.678 "name": "raid_bdev1", 00:12:55.678 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:55.678 "strip_size_kb": 0, 00:12:55.678 "state": "online", 00:12:55.678 "raid_level": "raid1", 00:12:55.678 
"superblock": false, 00:12:55.678 "num_base_bdevs": 2, 00:12:55.678 "num_base_bdevs_discovered": 1, 00:12:55.678 "num_base_bdevs_operational": 1, 00:12:55.678 "base_bdevs_list": [ 00:12:55.678 { 00:12:55.678 "name": null, 00:12:55.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.678 "is_configured": false, 00:12:55.678 "data_offset": 0, 00:12:55.678 "data_size": 65536 00:12:55.678 }, 00:12:55.678 { 00:12:55.678 "name": "BaseBdev2", 00:12:55.678 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:55.678 "is_configured": true, 00:12:55.678 "data_offset": 0, 00:12:55.678 "data_size": 65536 00:12:55.678 } 00:12:55.678 ] 00:12:55.678 }' 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.678 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.247 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:56.247 "name": "raid_bdev1", 00:12:56.247 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:56.247 "strip_size_kb": 0, 00:12:56.247 "state": "online", 00:12:56.247 "raid_level": "raid1", 00:12:56.247 "superblock": false, 00:12:56.247 "num_base_bdevs": 2, 00:12:56.247 "num_base_bdevs_discovered": 1, 00:12:56.248 "num_base_bdevs_operational": 1, 00:12:56.248 "base_bdevs_list": [ 00:12:56.248 { 00:12:56.248 "name": null, 00:12:56.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.248 "is_configured": false, 00:12:56.248 "data_offset": 0, 00:12:56.248 "data_size": 65536 00:12:56.248 }, 00:12:56.248 { 00:12:56.248 "name": "BaseBdev2", 00:12:56.248 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:56.248 "is_configured": true, 00:12:56.248 "data_offset": 0, 00:12:56.248 "data_size": 65536 00:12:56.248 } 00:12:56.248 ] 00:12:56.248 }' 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.248 [2024-11-06 12:43:44.826938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.248 [2024-11-06 12:43:44.843416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:56.248 12:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.248 
12:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:56.248 [2024-11-06 12:43:44.846189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.625 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.625 "name": "raid_bdev1", 00:12:57.625 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:57.625 "strip_size_kb": 0, 00:12:57.625 "state": "online", 00:12:57.625 "raid_level": "raid1", 00:12:57.625 "superblock": false, 00:12:57.625 "num_base_bdevs": 2, 00:12:57.625 "num_base_bdevs_discovered": 2, 00:12:57.625 "num_base_bdevs_operational": 2, 00:12:57.625 "process": { 00:12:57.625 "type": "rebuild", 00:12:57.626 "target": "spare", 00:12:57.626 "progress": { 00:12:57.626 "blocks": 20480, 00:12:57.626 "percent": 31 00:12:57.626 } 00:12:57.626 }, 00:12:57.626 "base_bdevs_list": [ 
00:12:57.626 { 00:12:57.626 "name": "spare", 00:12:57.626 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:12:57.626 "is_configured": true, 00:12:57.626 "data_offset": 0, 00:12:57.626 "data_size": 65536 00:12:57.626 }, 00:12:57.626 { 00:12:57.626 "name": "BaseBdev2", 00:12:57.626 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:57.626 "is_configured": true, 00:12:57.626 "data_offset": 0, 00:12:57.626 "data_size": 65536 00:12:57.626 } 00:12:57.626 ] 00:12:57.626 }' 00:12:57.626 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.626 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.626 12:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.626 
12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.626 "name": "raid_bdev1", 00:12:57.626 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:57.626 "strip_size_kb": 0, 00:12:57.626 "state": "online", 00:12:57.626 "raid_level": "raid1", 00:12:57.626 "superblock": false, 00:12:57.626 "num_base_bdevs": 2, 00:12:57.626 "num_base_bdevs_discovered": 2, 00:12:57.626 "num_base_bdevs_operational": 2, 00:12:57.626 "process": { 00:12:57.626 "type": "rebuild", 00:12:57.626 "target": "spare", 00:12:57.626 "progress": { 00:12:57.626 "blocks": 22528, 00:12:57.626 "percent": 34 00:12:57.626 } 00:12:57.626 }, 00:12:57.626 "base_bdevs_list": [ 00:12:57.626 { 00:12:57.626 "name": "spare", 00:12:57.626 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:12:57.626 "is_configured": true, 00:12:57.626 "data_offset": 0, 00:12:57.626 "data_size": 65536 00:12:57.626 }, 00:12:57.626 { 00:12:57.626 "name": "BaseBdev2", 00:12:57.626 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:57.626 "is_configured": true, 00:12:57.626 "data_offset": 0, 00:12:57.626 "data_size": 65536 00:12:57.626 } 00:12:57.626 ] 00:12:57.626 }' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.626 12:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.562 12:43:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.821 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.821 "name": "raid_bdev1", 00:12:58.821 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:58.821 "strip_size_kb": 0, 00:12:58.821 "state": "online", 00:12:58.821 "raid_level": "raid1", 00:12:58.821 "superblock": false, 00:12:58.821 "num_base_bdevs": 2, 00:12:58.821 "num_base_bdevs_discovered": 2, 00:12:58.821 "num_base_bdevs_operational": 2, 00:12:58.821 "process": { 
00:12:58.821 "type": "rebuild", 00:12:58.821 "target": "spare", 00:12:58.821 "progress": { 00:12:58.821 "blocks": 47104, 00:12:58.821 "percent": 71 00:12:58.821 } 00:12:58.821 }, 00:12:58.821 "base_bdevs_list": [ 00:12:58.821 { 00:12:58.821 "name": "spare", 00:12:58.821 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:12:58.821 "is_configured": true, 00:12:58.821 "data_offset": 0, 00:12:58.821 "data_size": 65536 00:12:58.821 }, 00:12:58.821 { 00:12:58.821 "name": "BaseBdev2", 00:12:58.821 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:58.821 "is_configured": true, 00:12:58.821 "data_offset": 0, 00:12:58.821 "data_size": 65536 00:12:58.821 } 00:12:58.821 ] 00:12:58.821 }' 00:12:58.821 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.821 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.821 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.821 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.821 12:43:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.756 [2024-11-06 12:43:48.070286] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:59.756 [2024-11-06 12:43:48.070388] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:59.756 [2024-11-06 12:43:48.070455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.756 "name": "raid_bdev1", 00:12:59.756 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:12:59.756 "strip_size_kb": 0, 00:12:59.756 "state": "online", 00:12:59.756 "raid_level": "raid1", 00:12:59.756 "superblock": false, 00:12:59.756 "num_base_bdevs": 2, 00:12:59.756 "num_base_bdevs_discovered": 2, 00:12:59.756 "num_base_bdevs_operational": 2, 00:12:59.756 "base_bdevs_list": [ 00:12:59.756 { 00:12:59.756 "name": "spare", 00:12:59.756 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:12:59.756 "is_configured": true, 00:12:59.756 "data_offset": 0, 00:12:59.756 "data_size": 65536 00:12:59.756 }, 00:12:59.756 { 00:12:59.756 "name": "BaseBdev2", 00:12:59.756 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:12:59.756 "is_configured": true, 00:12:59.756 "data_offset": 0, 00:12:59.756 "data_size": 65536 00:12:59.756 } 00:12:59.756 ] 00:12:59.756 }' 00:12:59.756 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:00.014 12:43:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.014 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.015 "name": "raid_bdev1", 00:13:00.015 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:13:00.015 "strip_size_kb": 0, 00:13:00.015 "state": "online", 00:13:00.015 "raid_level": "raid1", 00:13:00.015 "superblock": false, 00:13:00.015 "num_base_bdevs": 2, 00:13:00.015 "num_base_bdevs_discovered": 2, 00:13:00.015 "num_base_bdevs_operational": 2, 00:13:00.015 "base_bdevs_list": [ 00:13:00.015 { 00:13:00.015 "name": "spare", 00:13:00.015 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:13:00.015 "is_configured": true, 
00:13:00.015 "data_offset": 0, 00:13:00.015 "data_size": 65536 00:13:00.015 }, 00:13:00.015 { 00:13:00.015 "name": "BaseBdev2", 00:13:00.015 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:13:00.015 "is_configured": true, 00:13:00.015 "data_offset": 0, 00:13:00.015 "data_size": 65536 00:13:00.015 } 00:13:00.015 ] 00:13:00.015 }' 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.015 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.273 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.273 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.273 "name": "raid_bdev1", 00:13:00.273 "uuid": "84b9763b-afc1-462c-b7b0-e635863f8a96", 00:13:00.273 "strip_size_kb": 0, 00:13:00.273 "state": "online", 00:13:00.273 "raid_level": "raid1", 00:13:00.273 "superblock": false, 00:13:00.273 "num_base_bdevs": 2, 00:13:00.273 "num_base_bdevs_discovered": 2, 00:13:00.273 "num_base_bdevs_operational": 2, 00:13:00.273 "base_bdevs_list": [ 00:13:00.273 { 00:13:00.273 "name": "spare", 00:13:00.273 "uuid": "761f0cfc-6b59-5632-a514-f442872210a8", 00:13:00.273 "is_configured": true, 00:13:00.273 "data_offset": 0, 00:13:00.273 "data_size": 65536 00:13:00.273 }, 00:13:00.273 { 00:13:00.273 "name": "BaseBdev2", 00:13:00.273 "uuid": "165a806c-8e9f-5287-b124-3294486f00d1", 00:13:00.273 "is_configured": true, 00:13:00.273 "data_offset": 0, 00:13:00.273 "data_size": 65536 00:13:00.273 } 00:13:00.273 ] 00:13:00.273 }' 00:13:00.273 12:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.273 12:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.535 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.535 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.535 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.795 [2024-11-06 12:43:49.194167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.795 [2024-11-06 12:43:49.194243] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.795 [2024-11-06 12:43:49.194349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.795 [2024-11-06 12:43:49.194440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.795 [2024-11-06 12:43:49.194457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.795 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:01.054 /dev/nbd0 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.054 1+0 records in 00:13:01.054 1+0 records out 00:13:01.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620517 s, 6.6 MB/s 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:01.054 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.055 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:01.055 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:01.055 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.055 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.055 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:01.312 /dev/nbd1 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.312 1+0 records in 00:13:01.312 1+0 records out 00:13:01.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037473 s, 10.9 MB/s 00:13:01.312 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.571 12:43:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.571 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:01.829 12:43:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.830 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75498 00:13:02.088 12:43:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75498 ']' 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75498 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.088 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75498 00:13:02.346 killing process with pid 75498 00:13:02.346 Received shutdown signal, test time was about 60.000000 seconds 00:13:02.346 00:13:02.346 Latency(us) 00:13:02.346 [2024-11-06T12:43:51.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.346 [2024-11-06T12:43:51.003Z] =================================================================================================================== 00:13:02.346 [2024-11-06T12:43:51.003Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.346 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:02.346 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:02.346 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75498' 00:13:02.346 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75498 00:13:02.347 12:43:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75498 00:13:02.347 [2024-11-06 12:43:50.760762] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.605 [2024-11-06 12:43:51.041916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:03.541 00:13:03.541 real 0m19.008s 00:13:03.541 user 0m21.368s 00:13:03.541 sys 0m3.678s 00:13:03.541 
************************************ 00:13:03.541 END TEST raid_rebuild_test 00:13:03.541 ************************************ 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.541 12:43:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:03.541 12:43:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:03.541 12:43:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.541 12:43:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.541 ************************************ 00:13:03.541 START TEST raid_rebuild_test_sb 00:13:03.541 ************************************ 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75949 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75949 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75949 ']' 00:13:03.541 12:43:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.541 12:43:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.800 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.800 Zero copy mechanism will not be used. 00:13:03.800 [2024-11-06 12:43:52.259312] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:13:03.800 [2024-11-06 12:43:52.259494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75949 ] 00:13:03.800 [2024-11-06 12:43:52.443969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.060 [2024-11-06 12:43:52.603048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.318 [2024-11-06 12:43:52.810625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.318 [2024-11-06 12:43:52.810704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 BaseBdev1_malloc 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 [2024-11-06 12:43:53.341648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:04.885 [2024-11-06 12:43:53.341744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.885 [2024-11-06 12:43:53.341777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.885 [2024-11-06 12:43:53.341795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.885 [2024-11-06 12:43:53.344592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.885 [2024-11-06 12:43:53.344642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.885 BaseBdev1 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.885 12:43:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 BaseBdev2_malloc 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 [2024-11-06 12:43:53.398147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:04.885 [2024-11-06 12:43:53.398288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.885 [2024-11-06 12:43:53.398321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:04.885 [2024-11-06 12:43:53.398343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.885 [2024-11-06 12:43:53.401304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.885 [2024-11-06 12:43:53.401352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.885 BaseBdev2 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 spare_malloc 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 spare_delay 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.885 [2024-11-06 12:43:53.482273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.885 [2024-11-06 12:43:53.482366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.885 [2024-11-06 12:43:53.482393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:04.885 [2024-11-06 12:43:53.482409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.885 [2024-11-06 12:43:53.485090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.885 [2024-11-06 12:43:53.485352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.885 spare 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:04.885 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.886 12:43:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.886 [2024-11-06 12:43:53.494343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.886 [2024-11-06 12:43:53.496669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.886 [2024-11-06 12:43:53.496879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:04.886 [2024-11-06 12:43:53.496902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.886 [2024-11-06 12:43:53.497199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:04.886 [2024-11-06 12:43:53.497406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:04.886 [2024-11-06 12:43:53.497421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:04.886 [2024-11-06 12:43:53.497581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.886 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.144 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.144 "name": "raid_bdev1", 00:13:05.144 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:05.144 "strip_size_kb": 0, 00:13:05.144 "state": "online", 00:13:05.144 "raid_level": "raid1", 00:13:05.144 "superblock": true, 00:13:05.144 "num_base_bdevs": 2, 00:13:05.144 "num_base_bdevs_discovered": 2, 00:13:05.144 "num_base_bdevs_operational": 2, 00:13:05.144 "base_bdevs_list": [ 00:13:05.144 { 00:13:05.144 "name": "BaseBdev1", 00:13:05.144 "uuid": "ef499b00-0c9a-5748-b322-5f6aff4cf2eb", 00:13:05.144 "is_configured": true, 00:13:05.144 "data_offset": 2048, 00:13:05.144 "data_size": 63488 00:13:05.144 }, 00:13:05.144 { 00:13:05.144 "name": "BaseBdev2", 00:13:05.144 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:05.144 "is_configured": true, 00:13:05.144 "data_offset": 2048, 00:13:05.144 "data_size": 63488 00:13:05.144 } 00:13:05.144 ] 00:13:05.144 }' 00:13:05.144 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.144 12:43:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.402 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:05.402 12:43:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.402 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.402 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.402 [2024-11-06 12:43:53.978905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.402 12:43:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.402 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:05.402 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.402 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:05.402 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.402 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.402 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.661 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:05.920 [2024-11-06 12:43:54.322761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:05.920 /dev/nbd0 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.920 1+0 records in 00:13:05.920 1+0 records out 00:13:05.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325447 s, 12.6 MB/s 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:05.920 12:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:12.482 63488+0 records in 00:13:12.482 63488+0 records out 00:13:12.482 32505856 bytes (33 MB, 31 MiB) copied, 6.48034 s, 5.0 MB/s 00:13:12.482 12:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.482 12:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.482 12:44:00 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.482 12:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.482 12:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:12.482 12:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.482 12:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.740 [2024-11-06 12:44:01.145938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.740 [2024-11-06 12:44:01.162670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.740 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.740 "name": "raid_bdev1", 00:13:12.741 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:12.741 "strip_size_kb": 0, 00:13:12.741 "state": "online", 00:13:12.741 "raid_level": "raid1", 00:13:12.741 "superblock": true, 
00:13:12.741 "num_base_bdevs": 2, 00:13:12.741 "num_base_bdevs_discovered": 1, 00:13:12.741 "num_base_bdevs_operational": 1, 00:13:12.741 "base_bdevs_list": [ 00:13:12.741 { 00:13:12.741 "name": null, 00:13:12.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.741 "is_configured": false, 00:13:12.741 "data_offset": 0, 00:13:12.741 "data_size": 63488 00:13:12.741 }, 00:13:12.741 { 00:13:12.741 "name": "BaseBdev2", 00:13:12.741 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:12.741 "is_configured": true, 00:13:12.741 "data_offset": 2048, 00:13:12.741 "data_size": 63488 00:13:12.741 } 00:13:12.741 ] 00:13:12.741 }' 00:13:12.741 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.741 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.307 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.307 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.307 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.307 [2024-11-06 12:44:01.670861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.307 [2024-11-06 12:44:01.687182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:13.307 12:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.307 12:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:13.307 [2024-11-06 12:44:01.689689] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.244 "name": "raid_bdev1", 00:13:14.244 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:14.244 "strip_size_kb": 0, 00:13:14.244 "state": "online", 00:13:14.244 "raid_level": "raid1", 00:13:14.244 "superblock": true, 00:13:14.244 "num_base_bdevs": 2, 00:13:14.244 "num_base_bdevs_discovered": 2, 00:13:14.244 "num_base_bdevs_operational": 2, 00:13:14.244 "process": { 00:13:14.244 "type": "rebuild", 00:13:14.244 "target": "spare", 00:13:14.244 "progress": { 00:13:14.244 "blocks": 20480, 00:13:14.244 "percent": 32 00:13:14.244 } 00:13:14.244 }, 00:13:14.244 "base_bdevs_list": [ 00:13:14.244 { 00:13:14.244 "name": "spare", 00:13:14.244 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:14.244 "is_configured": true, 00:13:14.244 "data_offset": 2048, 00:13:14.244 "data_size": 63488 00:13:14.244 }, 00:13:14.244 { 00:13:14.244 "name": "BaseBdev2", 00:13:14.244 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:14.244 "is_configured": true, 00:13:14.244 "data_offset": 2048, 00:13:14.244 "data_size": 63488 
00:13:14.244 } 00:13:14.244 ] 00:13:14.244 }' 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.244 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.244 [2024-11-06 12:44:02.854636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.244 [2024-11-06 12:44:02.898598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.244 [2024-11-06 12:44:02.898720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.244 [2024-11-06 12:44:02.898746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.244 [2024-11-06 12:44:02.898761] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.503 "name": "raid_bdev1", 00:13:14.503 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:14.503 "strip_size_kb": 0, 00:13:14.503 "state": "online", 00:13:14.503 "raid_level": "raid1", 00:13:14.503 "superblock": true, 00:13:14.503 "num_base_bdevs": 2, 00:13:14.503 "num_base_bdevs_discovered": 1, 00:13:14.503 "num_base_bdevs_operational": 1, 00:13:14.503 "base_bdevs_list": [ 00:13:14.503 { 00:13:14.503 "name": null, 00:13:14.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.503 "is_configured": false, 00:13:14.503 "data_offset": 0, 00:13:14.503 "data_size": 63488 00:13:14.503 }, 00:13:14.503 { 00:13:14.503 "name": "BaseBdev2", 00:13:14.503 "uuid": 
"23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:14.503 "is_configured": true, 00:13:14.503 "data_offset": 2048, 00:13:14.503 "data_size": 63488 00:13:14.503 } 00:13:14.503 ] 00:13:14.503 }' 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.503 12:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.071 "name": "raid_bdev1", 00:13:15.071 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:15.071 "strip_size_kb": 0, 00:13:15.071 "state": "online", 00:13:15.071 "raid_level": "raid1", 00:13:15.071 "superblock": true, 00:13:15.071 "num_base_bdevs": 2, 00:13:15.071 "num_base_bdevs_discovered": 1, 00:13:15.071 "num_base_bdevs_operational": 1, 00:13:15.071 "base_bdevs_list": [ 00:13:15.071 { 
00:13:15.071 "name": null, 00:13:15.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.071 "is_configured": false, 00:13:15.071 "data_offset": 0, 00:13:15.071 "data_size": 63488 00:13:15.071 }, 00:13:15.071 { 00:13:15.071 "name": "BaseBdev2", 00:13:15.071 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:15.071 "is_configured": true, 00:13:15.071 "data_offset": 2048, 00:13:15.071 "data_size": 63488 00:13:15.071 } 00:13:15.071 ] 00:13:15.071 }' 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.071 [2024-11-06 12:44:03.630930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.071 [2024-11-06 12:44:03.646454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.071 12:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:15.071 [2024-11-06 12:44:03.648919] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.007 12:44:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.007 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.265 "name": "raid_bdev1", 00:13:16.265 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:16.265 "strip_size_kb": 0, 00:13:16.265 "state": "online", 00:13:16.265 "raid_level": "raid1", 00:13:16.265 "superblock": true, 00:13:16.265 "num_base_bdevs": 2, 00:13:16.265 "num_base_bdevs_discovered": 2, 00:13:16.265 "num_base_bdevs_operational": 2, 00:13:16.265 "process": { 00:13:16.265 "type": "rebuild", 00:13:16.265 "target": "spare", 00:13:16.265 "progress": { 00:13:16.265 "blocks": 20480, 00:13:16.265 "percent": 32 00:13:16.265 } 00:13:16.265 }, 00:13:16.265 "base_bdevs_list": [ 00:13:16.265 { 00:13:16.265 "name": "spare", 00:13:16.265 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:16.265 "is_configured": true, 00:13:16.265 "data_offset": 2048, 00:13:16.265 "data_size": 63488 00:13:16.265 }, 00:13:16.265 { 00:13:16.265 "name": "BaseBdev2", 00:13:16.265 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:16.265 
"is_configured": true, 00:13:16.265 "data_offset": 2048, 00:13:16.265 "data_size": 63488 00:13:16.265 } 00:13:16.265 ] 00:13:16.265 }' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:16.265 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=418 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.265 "name": "raid_bdev1", 00:13:16.265 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:16.265 "strip_size_kb": 0, 00:13:16.265 "state": "online", 00:13:16.265 "raid_level": "raid1", 00:13:16.265 "superblock": true, 00:13:16.265 "num_base_bdevs": 2, 00:13:16.265 "num_base_bdevs_discovered": 2, 00:13:16.265 "num_base_bdevs_operational": 2, 00:13:16.265 "process": { 00:13:16.265 "type": "rebuild", 00:13:16.265 "target": "spare", 00:13:16.265 "progress": { 00:13:16.265 "blocks": 22528, 00:13:16.265 "percent": 35 00:13:16.265 } 00:13:16.265 }, 00:13:16.265 "base_bdevs_list": [ 00:13:16.265 { 00:13:16.265 "name": "spare", 00:13:16.265 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:16.265 "is_configured": true, 00:13:16.265 "data_offset": 2048, 00:13:16.265 "data_size": 63488 00:13:16.265 }, 00:13:16.265 { 00:13:16.265 "name": "BaseBdev2", 00:13:16.265 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:16.265 "is_configured": true, 00:13:16.265 "data_offset": 2048, 00:13:16.265 "data_size": 63488 00:13:16.265 } 00:13:16.265 ] 00:13:16.265 }' 00:13:16.265 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.523 12:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.523 12:44:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.523 12:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.523 12:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.485 "name": "raid_bdev1", 00:13:17.485 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:17.485 "strip_size_kb": 0, 00:13:17.485 "state": "online", 00:13:17.485 "raid_level": "raid1", 00:13:17.485 "superblock": true, 00:13:17.485 "num_base_bdevs": 2, 00:13:17.485 "num_base_bdevs_discovered": 2, 00:13:17.485 "num_base_bdevs_operational": 2, 00:13:17.485 "process": { 
00:13:17.485 "type": "rebuild", 00:13:17.485 "target": "spare", 00:13:17.485 "progress": { 00:13:17.485 "blocks": 47104, 00:13:17.485 "percent": 74 00:13:17.485 } 00:13:17.485 }, 00:13:17.485 "base_bdevs_list": [ 00:13:17.485 { 00:13:17.485 "name": "spare", 00:13:17.485 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:17.485 "is_configured": true, 00:13:17.485 "data_offset": 2048, 00:13:17.485 "data_size": 63488 00:13:17.485 }, 00:13:17.485 { 00:13:17.485 "name": "BaseBdev2", 00:13:17.485 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:17.485 "is_configured": true, 00:13:17.485 "data_offset": 2048, 00:13:17.485 "data_size": 63488 00:13:17.485 } 00:13:17.485 ] 00:13:17.485 }' 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.485 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.743 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.743 12:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.310 [2024-11-06 12:44:06.771360] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:18.310 [2024-11-06 12:44:06.771467] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:18.310 [2024-11-06 12:44:06.771628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.567 
12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.567 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.825 "name": "raid_bdev1", 00:13:18.825 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:18.825 "strip_size_kb": 0, 00:13:18.825 "state": "online", 00:13:18.825 "raid_level": "raid1", 00:13:18.825 "superblock": true, 00:13:18.825 "num_base_bdevs": 2, 00:13:18.825 "num_base_bdevs_discovered": 2, 00:13:18.825 "num_base_bdevs_operational": 2, 00:13:18.825 "base_bdevs_list": [ 00:13:18.825 { 00:13:18.825 "name": "spare", 00:13:18.825 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:18.825 "is_configured": true, 00:13:18.825 "data_offset": 2048, 00:13:18.825 "data_size": 63488 00:13:18.825 }, 00:13:18.825 { 00:13:18.825 "name": "BaseBdev2", 00:13:18.825 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:18.825 "is_configured": true, 00:13:18.825 "data_offset": 2048, 00:13:18.825 "data_size": 63488 00:13:18.825 } 00:13:18.825 ] 00:13:18.825 }' 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.825 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.825 "name": "raid_bdev1", 00:13:18.825 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:18.825 "strip_size_kb": 0, 00:13:18.825 "state": "online", 00:13:18.825 "raid_level": "raid1", 00:13:18.825 "superblock": true, 00:13:18.825 "num_base_bdevs": 2, 00:13:18.825 "num_base_bdevs_discovered": 2, 00:13:18.826 "num_base_bdevs_operational": 2, 00:13:18.826 "base_bdevs_list": [ 00:13:18.826 { 00:13:18.826 
"name": "spare", 00:13:18.826 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:18.826 "is_configured": true, 00:13:18.826 "data_offset": 2048, 00:13:18.826 "data_size": 63488 00:13:18.826 }, 00:13:18.826 { 00:13:18.826 "name": "BaseBdev2", 00:13:18.826 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:18.826 "is_configured": true, 00:13:18.826 "data_offset": 2048, 00:13:18.826 "data_size": 63488 00:13:18.826 } 00:13:18.826 ] 00:13:18.826 }' 00:13:18.826 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.826 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.826 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.084 "name": "raid_bdev1", 00:13:19.084 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:19.084 "strip_size_kb": 0, 00:13:19.084 "state": "online", 00:13:19.084 "raid_level": "raid1", 00:13:19.084 "superblock": true, 00:13:19.084 "num_base_bdevs": 2, 00:13:19.084 "num_base_bdevs_discovered": 2, 00:13:19.084 "num_base_bdevs_operational": 2, 00:13:19.084 "base_bdevs_list": [ 00:13:19.084 { 00:13:19.084 "name": "spare", 00:13:19.084 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:19.084 "is_configured": true, 00:13:19.084 "data_offset": 2048, 00:13:19.084 "data_size": 63488 00:13:19.084 }, 00:13:19.084 { 00:13:19.084 "name": "BaseBdev2", 00:13:19.084 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:19.084 "is_configured": true, 00:13:19.084 "data_offset": 2048, 00:13:19.084 "data_size": 63488 00:13:19.084 } 00:13:19.084 ] 00:13:19.084 }' 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.084 12:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.651 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.651 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.651 12:44:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.651 [2024-11-06 12:44:08.071273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.652 [2024-11-06 12:44:08.071545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.652 [2024-11-06 12:44:08.071669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.652 [2024-11-06 12:44:08.071767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.652 [2024-11-06 12:44:08.071789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:19.652 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:19.909 /dev/nbd0 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.909 1+0 records in 00:13:19.909 1+0 records out 00:13:19.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255211 s, 16.0 MB/s 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:19.909 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:20.166 /dev/nbd1 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:20.166 12:44:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.166 1+0 records in 00:13:20.166 1+0 records out 00:13:20.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046039 s, 8.9 MB/s 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:20.166 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:20.424 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:20.424 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.424 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:20.424 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.424 
12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:20.424 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.424 12:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.682 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.941 [2024-11-06 12:44:09.589553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.941 [2024-11-06 12:44:09.589641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.941 [2024-11-06 12:44:09.589675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:20.941 [2024-11-06 12:44:09.589691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.941 [2024-11-06 12:44:09.592592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.941 [2024-11-06 12:44:09.592639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.941 [2024-11-06 12:44:09.592761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:20.941 [2024-11-06 
12:44:09.592824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.941 [2024-11-06 12:44:09.593020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.941 spare 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.941 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.198 [2024-11-06 12:44:09.693155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:21.198 [2024-11-06 12:44:09.693247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:21.198 [2024-11-06 12:44:09.693696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:21.198 [2024-11-06 12:44:09.693966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:21.198 [2024-11-06 12:44:09.693983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:21.198 [2024-11-06 12:44:09.694277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.198 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.198 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.198 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.198 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.198 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.199 "name": "raid_bdev1", 00:13:21.199 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:21.199 "strip_size_kb": 0, 00:13:21.199 "state": "online", 00:13:21.199 "raid_level": "raid1", 00:13:21.199 "superblock": true, 00:13:21.199 "num_base_bdevs": 2, 00:13:21.199 "num_base_bdevs_discovered": 2, 00:13:21.199 "num_base_bdevs_operational": 2, 00:13:21.199 "base_bdevs_list": [ 00:13:21.199 { 00:13:21.199 "name": "spare", 00:13:21.199 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:21.199 "is_configured": true, 00:13:21.199 "data_offset": 2048, 00:13:21.199 "data_size": 63488 00:13:21.199 }, 00:13:21.199 { 00:13:21.199 "name": "BaseBdev2", 00:13:21.199 "uuid": 
"23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:21.199 "is_configured": true, 00:13:21.199 "data_offset": 2048, 00:13:21.199 "data_size": 63488 00:13:21.199 } 00:13:21.199 ] 00:13:21.199 }' 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.199 12:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.766 "name": "raid_bdev1", 00:13:21.766 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:21.766 "strip_size_kb": 0, 00:13:21.766 "state": "online", 00:13:21.766 "raid_level": "raid1", 00:13:21.766 "superblock": true, 00:13:21.766 "num_base_bdevs": 2, 00:13:21.766 "num_base_bdevs_discovered": 2, 00:13:21.766 "num_base_bdevs_operational": 2, 00:13:21.766 "base_bdevs_list": [ 00:13:21.766 { 
00:13:21.766 "name": "spare", 00:13:21.766 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:21.766 "is_configured": true, 00:13:21.766 "data_offset": 2048, 00:13:21.766 "data_size": 63488 00:13:21.766 }, 00:13:21.766 { 00:13:21.766 "name": "BaseBdev2", 00:13:21.766 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:21.766 "is_configured": true, 00:13:21.766 "data_offset": 2048, 00:13:21.766 "data_size": 63488 00:13:21.766 } 00:13:21.766 ] 00:13:21.766 }' 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.766 [2024-11-06 12:44:10.414420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.766 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.024 "name": "raid_bdev1", 00:13:22.024 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:22.024 "strip_size_kb": 0, 00:13:22.024 
"state": "online", 00:13:22.024 "raid_level": "raid1", 00:13:22.024 "superblock": true, 00:13:22.024 "num_base_bdevs": 2, 00:13:22.024 "num_base_bdevs_discovered": 1, 00:13:22.024 "num_base_bdevs_operational": 1, 00:13:22.024 "base_bdevs_list": [ 00:13:22.024 { 00:13:22.024 "name": null, 00:13:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.024 "is_configured": false, 00:13:22.024 "data_offset": 0, 00:13:22.024 "data_size": 63488 00:13:22.024 }, 00:13:22.024 { 00:13:22.024 "name": "BaseBdev2", 00:13:22.024 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:22.024 "is_configured": true, 00:13:22.024 "data_offset": 2048, 00:13:22.024 "data_size": 63488 00:13:22.024 } 00:13:22.024 ] 00:13:22.024 }' 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.024 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.625 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.625 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.625 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.625 [2024-11-06 12:44:10.958605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.625 [2024-11-06 12:44:10.958844] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.625 [2024-11-06 12:44:10.958875] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:22.625 [2024-11-06 12:44:10.958930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.625 [2024-11-06 12:44:10.974358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:22.625 [2024-11-06 12:44:10.976806] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.625 12:44:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.625 12:44:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.558 12:44:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.558 "name": "raid_bdev1", 00:13:23.558 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:23.558 "strip_size_kb": 0, 00:13:23.558 "state": "online", 00:13:23.558 "raid_level": "raid1", 
00:13:23.558 "superblock": true, 00:13:23.558 "num_base_bdevs": 2, 00:13:23.558 "num_base_bdevs_discovered": 2, 00:13:23.558 "num_base_bdevs_operational": 2, 00:13:23.558 "process": { 00:13:23.558 "type": "rebuild", 00:13:23.558 "target": "spare", 00:13:23.558 "progress": { 00:13:23.558 "blocks": 20480, 00:13:23.558 "percent": 32 00:13:23.558 } 00:13:23.558 }, 00:13:23.558 "base_bdevs_list": [ 00:13:23.558 { 00:13:23.558 "name": "spare", 00:13:23.558 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:23.558 "is_configured": true, 00:13:23.558 "data_offset": 2048, 00:13:23.558 "data_size": 63488 00:13:23.558 }, 00:13:23.558 { 00:13:23.558 "name": "BaseBdev2", 00:13:23.558 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:23.558 "is_configured": true, 00:13:23.558 "data_offset": 2048, 00:13:23.558 "data_size": 63488 00:13:23.558 } 00:13:23.558 ] 00:13:23.558 }' 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.558 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.558 [2024-11-06 12:44:12.146514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.558 [2024-11-06 12:44:12.185714] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.558 [2024-11-06 12:44:12.185834] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:23.558 [2024-11-06 12:44:12.185858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.558 [2024-11-06 12:44:12.185873] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.817 "name": "raid_bdev1", 00:13:23.817 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:23.817 "strip_size_kb": 0, 00:13:23.817 "state": "online", 00:13:23.817 "raid_level": "raid1", 00:13:23.817 "superblock": true, 00:13:23.817 "num_base_bdevs": 2, 00:13:23.817 "num_base_bdevs_discovered": 1, 00:13:23.817 "num_base_bdevs_operational": 1, 00:13:23.817 "base_bdevs_list": [ 00:13:23.817 { 00:13:23.817 "name": null, 00:13:23.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.817 "is_configured": false, 00:13:23.817 "data_offset": 0, 00:13:23.817 "data_size": 63488 00:13:23.817 }, 00:13:23.817 { 00:13:23.817 "name": "BaseBdev2", 00:13:23.817 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:23.817 "is_configured": true, 00:13:23.817 "data_offset": 2048, 00:13:23.817 "data_size": 63488 00:13:23.817 } 00:13:23.817 ] 00:13:23.817 }' 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.817 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.383 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.383 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.383 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.383 [2024-11-06 12:44:12.750585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.383 [2024-11-06 12:44:12.750815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.383 [2024-11-06 12:44:12.750891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:24.383 [2024-11-06 12:44:12.751019] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.383 [2024-11-06 12:44:12.751670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.383 [2024-11-06 12:44:12.751713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.383 [2024-11-06 12:44:12.751830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:24.383 [2024-11-06 12:44:12.751856] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:24.383 [2024-11-06 12:44:12.751869] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:24.383 [2024-11-06 12:44:12.751902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.383 [2024-11-06 12:44:12.767645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:24.383 spare 00:13:24.383 12:44:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.383 12:44:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:24.384 [2024-11-06 12:44:12.770769] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.347 "name": "raid_bdev1", 00:13:25.347 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:25.347 "strip_size_kb": 0, 00:13:25.347 "state": "online", 00:13:25.347 "raid_level": "raid1", 00:13:25.347 "superblock": true, 00:13:25.347 "num_base_bdevs": 2, 00:13:25.347 "num_base_bdevs_discovered": 2, 00:13:25.347 "num_base_bdevs_operational": 2, 00:13:25.347 "process": { 00:13:25.347 "type": "rebuild", 00:13:25.347 "target": "spare", 00:13:25.347 "progress": { 00:13:25.347 "blocks": 20480, 00:13:25.347 "percent": 32 00:13:25.347 } 00:13:25.347 }, 00:13:25.347 "base_bdevs_list": [ 00:13:25.347 { 00:13:25.347 "name": "spare", 00:13:25.347 "uuid": "7d5e9931-a281-517e-8a35-8a25f1e1778b", 00:13:25.347 "is_configured": true, 00:13:25.347 "data_offset": 2048, 00:13:25.347 "data_size": 63488 00:13:25.347 }, 00:13:25.347 { 00:13:25.347 "name": "BaseBdev2", 00:13:25.347 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:25.347 "is_configured": true, 00:13:25.347 "data_offset": 2048, 00:13:25.347 "data_size": 63488 00:13:25.347 } 00:13:25.347 ] 00:13:25.347 }' 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.347 
12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.347 12:44:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.347 [2024-11-06 12:44:13.959989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.347 [2024-11-06 12:44:13.979915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:25.347 [2024-11-06 12:44:13.980209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.347 [2024-11-06 12:44:13.980246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.347 [2024-11-06 12:44:13.980260] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.606 "name": "raid_bdev1", 00:13:25.606 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:25.606 "strip_size_kb": 0, 00:13:25.606 "state": "online", 00:13:25.606 "raid_level": "raid1", 00:13:25.606 "superblock": true, 00:13:25.606 "num_base_bdevs": 2, 00:13:25.606 "num_base_bdevs_discovered": 1, 00:13:25.606 "num_base_bdevs_operational": 1, 00:13:25.606 "base_bdevs_list": [ 00:13:25.606 { 00:13:25.606 "name": null, 00:13:25.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.606 "is_configured": false, 00:13:25.606 "data_offset": 0, 00:13:25.606 "data_size": 63488 00:13:25.606 }, 00:13:25.606 { 00:13:25.606 "name": "BaseBdev2", 00:13:25.606 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:25.606 "is_configured": true, 00:13:25.606 "data_offset": 2048, 00:13:25.606 "data_size": 63488 00:13:25.606 } 00:13:25.606 ] 00:13:25.606 }' 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.606 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.173 12:44:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.173 "name": "raid_bdev1", 00:13:26.173 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:26.173 "strip_size_kb": 0, 00:13:26.173 "state": "online", 00:13:26.173 "raid_level": "raid1", 00:13:26.173 "superblock": true, 00:13:26.173 "num_base_bdevs": 2, 00:13:26.173 "num_base_bdevs_discovered": 1, 00:13:26.173 "num_base_bdevs_operational": 1, 00:13:26.173 "base_bdevs_list": [ 00:13:26.173 { 00:13:26.173 "name": null, 00:13:26.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.173 "is_configured": false, 00:13:26.173 "data_offset": 0, 00:13:26.173 "data_size": 63488 00:13:26.173 }, 00:13:26.173 { 00:13:26.173 "name": "BaseBdev2", 00:13:26.173 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:26.173 "is_configured": true, 00:13:26.173 "data_offset": 2048, 00:13:26.173 "data_size": 
63488 00:13:26.173 } 00:13:26.173 ] 00:13:26.173 }' 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.173 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.174 [2024-11-06 12:44:14.756229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:26.174 [2024-11-06 12:44:14.756429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.174 [2024-11-06 12:44:14.756508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:26.174 [2024-11-06 12:44:14.756667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.174 [2024-11-06 12:44:14.757264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.174 [2024-11-06 12:44:14.757290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:26.174 [2024-11-06 12:44:14.757397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:26.174 [2024-11-06 12:44:14.757419] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:26.174 [2024-11-06 12:44:14.757435] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:26.174 [2024-11-06 12:44:14.757448] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:26.174 BaseBdev1 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.174 12:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.549 "name": "raid_bdev1", 00:13:27.549 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:27.549 "strip_size_kb": 0, 00:13:27.549 "state": "online", 00:13:27.549 "raid_level": "raid1", 00:13:27.549 "superblock": true, 00:13:27.549 "num_base_bdevs": 2, 00:13:27.549 "num_base_bdevs_discovered": 1, 00:13:27.549 "num_base_bdevs_operational": 1, 00:13:27.549 "base_bdevs_list": [ 00:13:27.549 { 00:13:27.549 "name": null, 00:13:27.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.549 "is_configured": false, 00:13:27.549 "data_offset": 0, 00:13:27.549 "data_size": 63488 00:13:27.549 }, 00:13:27.549 { 00:13:27.549 "name": "BaseBdev2", 00:13:27.549 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:27.549 "is_configured": true, 00:13:27.549 "data_offset": 2048, 00:13:27.549 "data_size": 63488 00:13:27.549 } 00:13:27.549 ] 00:13:27.549 }' 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.549 12:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.807 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.807 "name": "raid_bdev1", 00:13:27.807 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:27.807 "strip_size_kb": 0, 00:13:27.807 "state": "online", 00:13:27.807 "raid_level": "raid1", 00:13:27.807 "superblock": true, 00:13:27.807 "num_base_bdevs": 2, 00:13:27.807 "num_base_bdevs_discovered": 1, 00:13:27.807 "num_base_bdevs_operational": 1, 00:13:27.807 "base_bdevs_list": [ 00:13:27.807 { 00:13:27.807 "name": null, 00:13:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.807 "is_configured": false, 00:13:27.807 "data_offset": 0, 00:13:27.807 "data_size": 63488 00:13:27.807 }, 00:13:27.807 { 00:13:27.807 "name": "BaseBdev2", 00:13:27.808 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:27.808 "is_configured": true, 00:13:27.808 "data_offset": 2048, 00:13:27.808 "data_size": 63488 00:13:27.808 } 00:13:27.808 ] 00:13:27.808 }' 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.808 12:44:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.808 [2024-11-06 12:44:16.448781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.808 [2024-11-06 12:44:16.449126] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:27.808 [2024-11-06 12:44:16.449158] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:27.808 request: 00:13:27.808 { 00:13:27.808 "base_bdev": "BaseBdev1", 00:13:27.808 "raid_bdev": "raid_bdev1", 00:13:27.808 "method": 
"bdev_raid_add_base_bdev", 00:13:27.808 "req_id": 1 00:13:27.808 } 00:13:27.808 Got JSON-RPC error response 00:13:27.808 response: 00:13:27.808 { 00:13:27.808 "code": -22, 00:13:27.808 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:27.808 } 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.808 12:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.184 12:44:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.184 "name": "raid_bdev1", 00:13:29.184 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:29.184 "strip_size_kb": 0, 00:13:29.184 "state": "online", 00:13:29.184 "raid_level": "raid1", 00:13:29.184 "superblock": true, 00:13:29.184 "num_base_bdevs": 2, 00:13:29.184 "num_base_bdevs_discovered": 1, 00:13:29.184 "num_base_bdevs_operational": 1, 00:13:29.184 "base_bdevs_list": [ 00:13:29.184 { 00:13:29.184 "name": null, 00:13:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.184 "is_configured": false, 00:13:29.184 "data_offset": 0, 00:13:29.184 "data_size": 63488 00:13:29.184 }, 00:13:29.184 { 00:13:29.184 "name": "BaseBdev2", 00:13:29.184 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:29.184 "is_configured": true, 00:13:29.184 "data_offset": 2048, 00:13:29.184 "data_size": 63488 00:13:29.184 } 00:13:29.184 ] 00:13:29.184 }' 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.184 12:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.474 "name": "raid_bdev1", 00:13:29.474 "uuid": "7ff18677-8b05-4c51-b4e2-681e8ee8edaa", 00:13:29.474 "strip_size_kb": 0, 00:13:29.474 "state": "online", 00:13:29.474 "raid_level": "raid1", 00:13:29.474 "superblock": true, 00:13:29.474 "num_base_bdevs": 2, 00:13:29.474 "num_base_bdevs_discovered": 1, 00:13:29.474 "num_base_bdevs_operational": 1, 00:13:29.474 "base_bdevs_list": [ 00:13:29.474 { 00:13:29.474 "name": null, 00:13:29.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.474 "is_configured": false, 00:13:29.474 "data_offset": 0, 00:13:29.474 "data_size": 63488 00:13:29.474 }, 00:13:29.474 { 00:13:29.474 "name": "BaseBdev2", 00:13:29.474 "uuid": "23dd18ab-4983-5aa7-b96d-17a58515061a", 00:13:29.474 "is_configured": true, 00:13:29.474 "data_offset": 2048, 00:13:29.474 "data_size": 63488 00:13:29.474 } 00:13:29.474 ] 00:13:29.474 }' 00:13:29.474 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.733 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:29.733 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.733 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.733 12:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75949 00:13:29.733 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75949 ']' 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75949 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75949 00:13:29.734 killing process with pid 75949 00:13:29.734 Received shutdown signal, test time was about 60.000000 seconds 00:13:29.734 00:13:29.734 Latency(us) 00:13:29.734 [2024-11-06T12:44:18.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.734 [2024-11-06T12:44:18.391Z] =================================================================================================================== 00:13:29.734 [2024-11-06T12:44:18.391Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75949' 00:13:29.734 12:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75949 00:13:29.734 [2024-11-06 12:44:18.204434] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.734 12:44:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75949 00:13:29.734 [2024-11-06 12:44:18.204591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.734 [2024-11-06 12:44:18.204660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.734 [2024-11-06 12:44:18.204680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:29.992 [2024-11-06 12:44:18.492795] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.927 12:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:30.927 00:13:30.927 real 0m27.384s 00:13:30.927 user 0m33.905s 00:13:30.927 sys 0m4.276s 00:13:30.927 12:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.927 12:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.927 ************************************ 00:13:30.927 END TEST raid_rebuild_test_sb 00:13:30.927 ************************************ 00:13:30.927 12:44:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:30.927 12:44:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:30.927 12:44:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:30.927 12:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.185 ************************************ 00:13:31.185 START TEST raid_rebuild_test_io 00:13:31.185 ************************************ 00:13:31.185 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:31.186 
12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76718 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76718 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76718 ']' 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:31.186 12:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.186 [2024-11-06 12:44:19.705656] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:13:31.186 [2024-11-06 12:44:19.706490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:31.186 Zero copy mechanism will not be used. 
00:13:31.186 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76718 ] 00:13:31.444 [2024-11-06 12:44:19.907789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.444 [2024-11-06 12:44:20.041480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.701 [2024-11-06 12:44:20.276274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.701 [2024-11-06 12:44:20.276354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 BaseBdev1_malloc 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 [2024-11-06 12:44:20.731103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.283 [2024-11-06 12:44:20.731181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:32.283 [2024-11-06 12:44:20.731231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.283 [2024-11-06 12:44:20.731252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.283 [2024-11-06 12:44:20.734049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.283 [2024-11-06 12:44:20.734092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.283 BaseBdev1 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 BaseBdev2_malloc 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 [2024-11-06 12:44:20.783187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:32.283 [2024-11-06 12:44:20.783275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.283 [2024-11-06 12:44:20.783304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.283 [2024-11-06 12:44:20.783333] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.283 [2024-11-06 12:44:20.786030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.283 [2024-11-06 12:44:20.786073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:32.283 BaseBdev2 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 spare_malloc 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 spare_delay 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 [2024-11-06 12:44:20.857279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.283 [2024-11-06 12:44:20.857355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:32.283 [2024-11-06 12:44:20.857385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:32.283 [2024-11-06 12:44:20.857409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.283 [2024-11-06 12:44:20.860204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.283 [2024-11-06 12:44:20.860249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.283 spare 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.283 [2024-11-06 12:44:20.865342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.283 [2024-11-06 12:44:20.867736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.283 [2024-11-06 12:44:20.867874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:32.283 [2024-11-06 12:44:20.867897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:32.283 [2024-11-06 12:44:20.868256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:32.283 [2024-11-06 12:44:20.868463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:32.283 [2024-11-06 12:44:20.868481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:32.283 [2024-11-06 12:44:20.868676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.283 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.284 "name": "raid_bdev1", 00:13:32.284 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:32.284 
"strip_size_kb": 0, 00:13:32.284 "state": "online", 00:13:32.284 "raid_level": "raid1", 00:13:32.284 "superblock": false, 00:13:32.284 "num_base_bdevs": 2, 00:13:32.284 "num_base_bdevs_discovered": 2, 00:13:32.284 "num_base_bdevs_operational": 2, 00:13:32.284 "base_bdevs_list": [ 00:13:32.284 { 00:13:32.284 "name": "BaseBdev1", 00:13:32.284 "uuid": "7fcca346-29c3-5b4b-b4ed-a8c7d2cb3b6f", 00:13:32.284 "is_configured": true, 00:13:32.284 "data_offset": 0, 00:13:32.284 "data_size": 65536 00:13:32.284 }, 00:13:32.284 { 00:13:32.284 "name": "BaseBdev2", 00:13:32.284 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:32.284 "is_configured": true, 00:13:32.284 "data_offset": 0, 00:13:32.284 "data_size": 65536 00:13:32.284 } 00:13:32.284 ] 00:13:32.284 }' 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.284 12:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.850 [2024-11-06 12:44:21.385850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.850 [2024-11-06 12:44:21.493533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.850 12:44:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.850 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.108 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.108 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.108 "name": "raid_bdev1", 00:13:33.108 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:33.108 "strip_size_kb": 0, 00:13:33.108 "state": "online", 00:13:33.108 "raid_level": "raid1", 00:13:33.108 "superblock": false, 00:13:33.108 "num_base_bdevs": 2, 00:13:33.108 "num_base_bdevs_discovered": 1, 00:13:33.108 "num_base_bdevs_operational": 1, 00:13:33.108 "base_bdevs_list": [ 00:13:33.108 { 00:13:33.108 "name": null, 00:13:33.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.108 "is_configured": false, 00:13:33.108 "data_offset": 0, 00:13:33.108 "data_size": 65536 00:13:33.108 }, 00:13:33.108 { 00:13:33.108 "name": "BaseBdev2", 00:13:33.108 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:33.108 "is_configured": true, 00:13:33.108 "data_offset": 0, 00:13:33.108 "data_size": 65536 00:13:33.108 } 00:13:33.108 ] 00:13:33.108 }' 00:13:33.108 12:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.108 12:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:33.108 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.108 Zero copy mechanism will not be used. 00:13:33.108 Running I/O for 60 seconds... 00:13:33.108 [2024-11-06 12:44:21.617504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:33.674 12:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.674 12:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.674 12:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.674 [2024-11-06 12:44:22.042173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.674 12:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.674 12:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:33.674 [2024-11-06 12:44:22.096885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:33.674 [2024-11-06 12:44:22.099385] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.674 [2024-11-06 12:44:22.225851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:33.674 [2024-11-06 12:44:22.226530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:33.931 [2024-11-06 12:44:22.475665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.188 188.00 IOPS, 564.00 MiB/s [2024-11-06T12:44:22.845Z] [2024-11-06 12:44:22.742077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:34.498 [2024-11-06 12:44:22.953774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.498 "name": "raid_bdev1", 00:13:34.498 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:34.498 "strip_size_kb": 0, 00:13:34.498 "state": "online", 00:13:34.498 "raid_level": "raid1", 00:13:34.498 "superblock": false, 00:13:34.498 "num_base_bdevs": 2, 00:13:34.498 "num_base_bdevs_discovered": 2, 00:13:34.498 "num_base_bdevs_operational": 2, 00:13:34.498 "process": { 00:13:34.498 "type": "rebuild", 00:13:34.498 "target": "spare", 00:13:34.498 "progress": { 00:13:34.498 "blocks": 10240, 00:13:34.498 "percent": 15 00:13:34.498 } 00:13:34.498 }, 00:13:34.498 "base_bdevs_list": [ 00:13:34.498 { 00:13:34.498 "name": "spare", 00:13:34.498 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:34.498 
"is_configured": true, 00:13:34.498 "data_offset": 0, 00:13:34.498 "data_size": 65536 00:13:34.498 }, 00:13:34.498 { 00:13:34.498 "name": "BaseBdev2", 00:13:34.498 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:34.498 "is_configured": true, 00:13:34.498 "data_offset": 0, 00:13:34.498 "data_size": 65536 00:13:34.498 } 00:13:34.498 ] 00:13:34.498 }' 00:13:34.498 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.756 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.756 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.756 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.756 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.756 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.756 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.756 [2024-11-06 12:44:23.257846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.756 [2024-11-06 12:44:23.310931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:34.756 [2024-11-06 12:44:23.337963] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.756 [2024-11-06 12:44:23.349915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.756 [2024-11-06 12:44:23.350015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.756 [2024-11-06 12:44:23.350044] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.756 [2024-11-06 12:44:23.406518] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.015 "name": "raid_bdev1", 00:13:35.015 
"uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:35.015 "strip_size_kb": 0, 00:13:35.015 "state": "online", 00:13:35.015 "raid_level": "raid1", 00:13:35.015 "superblock": false, 00:13:35.015 "num_base_bdevs": 2, 00:13:35.015 "num_base_bdevs_discovered": 1, 00:13:35.015 "num_base_bdevs_operational": 1, 00:13:35.015 "base_bdevs_list": [ 00:13:35.015 { 00:13:35.015 "name": null, 00:13:35.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.015 "is_configured": false, 00:13:35.015 "data_offset": 0, 00:13:35.015 "data_size": 65536 00:13:35.015 }, 00:13:35.015 { 00:13:35.015 "name": "BaseBdev2", 00:13:35.015 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:35.015 "is_configured": true, 00:13:35.015 "data_offset": 0, 00:13:35.015 "data_size": 65536 00:13:35.015 } 00:13:35.015 ] 00:13:35.015 }' 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.015 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.583 143.50 IOPS, 430.50 MiB/s [2024-11-06T12:44:24.240Z] 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.583 12:44:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.583 12:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.583 "name": "raid_bdev1", 00:13:35.583 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:35.583 "strip_size_kb": 0, 00:13:35.583 "state": "online", 00:13:35.583 "raid_level": "raid1", 00:13:35.583 "superblock": false, 00:13:35.583 "num_base_bdevs": 2, 00:13:35.583 "num_base_bdevs_discovered": 1, 00:13:35.583 "num_base_bdevs_operational": 1, 00:13:35.583 "base_bdevs_list": [ 00:13:35.583 { 00:13:35.583 "name": null, 00:13:35.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.583 "is_configured": false, 00:13:35.583 "data_offset": 0, 00:13:35.583 "data_size": 65536 00:13:35.583 }, 00:13:35.583 { 00:13:35.583 "name": "BaseBdev2", 00:13:35.583 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:35.583 "is_configured": true, 00:13:35.583 "data_offset": 0, 00:13:35.583 "data_size": 65536 00:13:35.583 } 00:13:35.583 ] 00:13:35.583 }' 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.583 [2024-11-06 12:44:24.113125] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.583 12:44:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.583 [2024-11-06 12:44:24.191037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:35.583 [2024-11-06 12:44:24.193599] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.842 [2024-11-06 12:44:24.306034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.842 [2024-11-06 12:44:24.306726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.100 [2024-11-06 12:44:24.510756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.100 [2024-11-06 12:44:24.511159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.358 148.33 IOPS, 445.00 MiB/s [2024-11-06T12:44:25.015Z] [2024-11-06 12:44:24.856748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.358 [2024-11-06 12:44:24.857469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:36.617 [2024-11-06 12:44:25.078856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.617 12:44:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.617 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.617 "name": "raid_bdev1", 00:13:36.617 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:36.617 "strip_size_kb": 0, 00:13:36.617 "state": "online", 00:13:36.617 "raid_level": "raid1", 00:13:36.617 "superblock": false, 00:13:36.617 "num_base_bdevs": 2, 00:13:36.617 "num_base_bdevs_discovered": 2, 00:13:36.617 "num_base_bdevs_operational": 2, 00:13:36.617 "process": { 00:13:36.617 "type": "rebuild", 00:13:36.617 "target": "spare", 00:13:36.617 "progress": { 00:13:36.617 "blocks": 10240, 00:13:36.617 "percent": 15 00:13:36.617 } 00:13:36.617 }, 00:13:36.617 "base_bdevs_list": [ 00:13:36.617 { 00:13:36.617 "name": "spare", 00:13:36.617 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:36.617 "is_configured": true, 00:13:36.617 "data_offset": 0, 00:13:36.617 "data_size": 65536 00:13:36.618 }, 00:13:36.618 { 00:13:36.618 "name": "BaseBdev2", 00:13:36.618 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:36.618 "is_configured": true, 00:13:36.618 "data_offset": 0, 00:13:36.618 "data_size": 65536 00:13:36.618 } 00:13:36.618 ] 
00:13:36.618 }' 00:13:36.618 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=439 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.877 "name": "raid_bdev1", 00:13:36.877 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:36.877 "strip_size_kb": 0, 00:13:36.877 "state": "online", 00:13:36.877 "raid_level": "raid1", 00:13:36.877 "superblock": false, 00:13:36.877 "num_base_bdevs": 2, 00:13:36.877 "num_base_bdevs_discovered": 2, 00:13:36.877 "num_base_bdevs_operational": 2, 00:13:36.877 "process": { 00:13:36.877 "type": "rebuild", 00:13:36.877 "target": "spare", 00:13:36.877 "progress": { 00:13:36.877 "blocks": 14336, 00:13:36.877 "percent": 21 00:13:36.877 } 00:13:36.877 }, 00:13:36.877 "base_bdevs_list": [ 00:13:36.877 { 00:13:36.877 "name": "spare", 00:13:36.877 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:36.877 "is_configured": true, 00:13:36.877 "data_offset": 0, 00:13:36.877 "data_size": 65536 00:13:36.877 }, 00:13:36.877 { 00:13:36.877 "name": "BaseBdev2", 00:13:36.877 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:36.877 "is_configured": true, 00:13:36.877 "data_offset": 0, 00:13:36.877 "data_size": 65536 00:13:36.877 } 00:13:36.877 ] 00:13:36.877 }' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.877 [2024-11-06 12:44:25.427616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:36.877 12:44:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.136 134.75 IOPS, 404.25 MiB/s [2024-11-06T12:44:25.793Z] [2024-11-06 12:44:25.749160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:37.395 [2024-11-06 12:44:25.961514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:37.395 [2024-11-06 12:44:25.961913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:37.653 [2024-11-06 12:44:26.268415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.911 12:44:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.170 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.170 "name": "raid_bdev1", 00:13:38.170 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:38.170 "strip_size_kb": 0, 00:13:38.170 "state": "online", 00:13:38.170 "raid_level": "raid1", 00:13:38.170 "superblock": false, 00:13:38.170 "num_base_bdevs": 2, 00:13:38.170 "num_base_bdevs_discovered": 2, 00:13:38.170 "num_base_bdevs_operational": 2, 00:13:38.170 "process": { 00:13:38.170 "type": "rebuild", 00:13:38.170 "target": "spare", 00:13:38.170 "progress": { 00:13:38.170 "blocks": 30720, 00:13:38.170 "percent": 46 00:13:38.170 } 00:13:38.170 }, 00:13:38.170 "base_bdevs_list": [ 00:13:38.170 { 00:13:38.170 "name": "spare", 00:13:38.170 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:38.170 "is_configured": true, 00:13:38.170 "data_offset": 0, 00:13:38.170 "data_size": 65536 00:13:38.170 }, 00:13:38.170 { 00:13:38.170 "name": "BaseBdev2", 00:13:38.170 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:38.170 "is_configured": true, 00:13:38.170 "data_offset": 0, 00:13:38.170 "data_size": 65536 00:13:38.170 } 00:13:38.170 ] 00:13:38.170 }' 00:13:38.170 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.170 [2024-11-06 12:44:26.624040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:38.170 [2024-11-06 12:44:26.624618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:38.170 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.170 119.40 IOPS, 358.20 MiB/s [2024-11-06T12:44:26.827Z] 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.170 12:44:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.170 12:44:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.105 [2024-11-06 12:44:27.413916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:39.105 [2024-11-06 12:44:27.525182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:39.105 106.50 IOPS, 319.50 MiB/s [2024-11-06T12:44:27.762Z] 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.105 "name": "raid_bdev1", 00:13:39.105 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:39.105 
"strip_size_kb": 0, 00:13:39.105 "state": "online", 00:13:39.105 "raid_level": "raid1", 00:13:39.105 "superblock": false, 00:13:39.105 "num_base_bdevs": 2, 00:13:39.105 "num_base_bdevs_discovered": 2, 00:13:39.105 "num_base_bdevs_operational": 2, 00:13:39.105 "process": { 00:13:39.105 "type": "rebuild", 00:13:39.105 "target": "spare", 00:13:39.105 "progress": { 00:13:39.105 "blocks": 47104, 00:13:39.105 "percent": 71 00:13:39.105 } 00:13:39.105 }, 00:13:39.105 "base_bdevs_list": [ 00:13:39.105 { 00:13:39.105 "name": "spare", 00:13:39.105 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:39.105 "is_configured": true, 00:13:39.105 "data_offset": 0, 00:13:39.105 "data_size": 65536 00:13:39.105 }, 00:13:39.105 { 00:13:39.105 "name": "BaseBdev2", 00:13:39.105 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:39.105 "is_configured": true, 00:13:39.105 "data_offset": 0, 00:13:39.105 "data_size": 65536 00:13:39.105 } 00:13:39.105 ] 00:13:39.105 }' 00:13:39.105 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.364 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.364 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.364 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.364 12:44:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.621 [2024-11-06 12:44:28.211410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:39.879 [2024-11-06 12:44:28.321964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:40.140 96.71 IOPS, 290.14 MiB/s [2024-11-06T12:44:28.797Z] [2024-11-06 12:44:28.661128] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:13:40.140 [2024-11-06 12:44:28.768883] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:40.140 [2024-11-06 12:44:28.770996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.399 "name": "raid_bdev1", 00:13:40.399 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:40.399 "strip_size_kb": 0, 00:13:40.399 "state": "online", 00:13:40.399 "raid_level": "raid1", 00:13:40.399 "superblock": false, 00:13:40.399 "num_base_bdevs": 2, 00:13:40.399 "num_base_bdevs_discovered": 2, 00:13:40.399 "num_base_bdevs_operational": 2, 00:13:40.399 "base_bdevs_list": [ 00:13:40.399 
{ 00:13:40.399 "name": "spare", 00:13:40.399 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:40.399 "is_configured": true, 00:13:40.399 "data_offset": 0, 00:13:40.399 "data_size": 65536 00:13:40.399 }, 00:13:40.399 { 00:13:40.399 "name": "BaseBdev2", 00:13:40.399 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:40.399 "is_configured": true, 00:13:40.399 "data_offset": 0, 00:13:40.399 "data_size": 65536 00:13:40.399 } 00:13:40.399 ] 00:13:40.399 }' 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.399 12:44:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:40.399 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.399 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.399 "name": "raid_bdev1", 00:13:40.399 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:40.399 "strip_size_kb": 0, 00:13:40.399 "state": "online", 00:13:40.399 "raid_level": "raid1", 00:13:40.399 "superblock": false, 00:13:40.399 "num_base_bdevs": 2, 00:13:40.399 "num_base_bdevs_discovered": 2, 00:13:40.399 "num_base_bdevs_operational": 2, 00:13:40.399 "base_bdevs_list": [ 00:13:40.399 { 00:13:40.399 "name": "spare", 00:13:40.399 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:40.399 "is_configured": true, 00:13:40.399 "data_offset": 0, 00:13:40.399 "data_size": 65536 00:13:40.399 }, 00:13:40.399 { 00:13:40.399 "name": "BaseBdev2", 00:13:40.399 "uuid": "f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:40.399 "is_configured": true, 00:13:40.399 "data_offset": 0, 00:13:40.399 "data_size": 65536 00:13:40.399 } 00:13:40.399 ] 00:13:40.399 }' 00:13:40.399 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.657 "name": "raid_bdev1", 00:13:40.657 "uuid": "4b430653-0eac-4002-bd63-4755213fe47a", 00:13:40.657 "strip_size_kb": 0, 00:13:40.657 "state": "online", 00:13:40.657 "raid_level": "raid1", 00:13:40.657 "superblock": false, 00:13:40.657 "num_base_bdevs": 2, 00:13:40.657 "num_base_bdevs_discovered": 2, 00:13:40.657 "num_base_bdevs_operational": 2, 00:13:40.657 "base_bdevs_list": [ 00:13:40.657 { 00:13:40.657 "name": "spare", 00:13:40.657 "uuid": "6d6d55b7-8261-5ef3-9633-1a6f2c5c20c1", 00:13:40.657 "is_configured": true, 00:13:40.657 "data_offset": 0, 00:13:40.657 "data_size": 65536 00:13:40.657 }, 00:13:40.657 { 00:13:40.657 "name": "BaseBdev2", 00:13:40.657 "uuid": 
"f9b50e3b-3eac-5f26-baf2-2369e67c0881", 00:13:40.657 "is_configured": true, 00:13:40.657 "data_offset": 0, 00:13:40.657 "data_size": 65536 00:13:40.657 } 00:13:40.657 ] 00:13:40.657 }' 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.657 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.223 87.88 IOPS, 263.62 MiB/s [2024-11-06T12:44:29.880Z] 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.223 [2024-11-06 12:44:29.659798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.223 [2024-11-06 12:44:29.659844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.223 00:13:41.223 Latency(us) 00:13:41.223 [2024-11-06T12:44:29.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.223 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:41.223 raid_bdev1 : 8.14 87.10 261.31 0.00 0.00 16325.94 288.58 118203.11 00:13:41.223 [2024-11-06T12:44:29.880Z] =================================================================================================================== 00:13:41.223 [2024-11-06T12:44:29.880Z] Total : 87.10 261.31 0.00 0.00 16325.94 288.58 118203.11 00:13:41.223 [2024-11-06 12:44:29.779556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.223 [2024-11-06 12:44:29.779624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.223 [2024-11-06 12:44:29.779739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.223 
[2024-11-06 12:44:29.779756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:41.223 { 00:13:41.223 "results": [ 00:13:41.223 { 00:13:41.223 "job": "raid_bdev1", 00:13:41.223 "core_mask": "0x1", 00:13:41.223 "workload": "randrw", 00:13:41.223 "percentage": 50, 00:13:41.223 "status": "finished", 00:13:41.223 "queue_depth": 2, 00:13:41.223 "io_size": 3145728, 00:13:41.223 "runtime": 8.139618, 00:13:41.223 "iops": 87.1048248210174, 00:13:41.223 "mibps": 261.3144744630522, 00:13:41.223 "io_failed": 0, 00:13:41.223 "io_timeout": 0, 00:13:41.223 "avg_latency_us": 16325.935694319787, 00:13:41.223 "min_latency_us": 288.58181818181816, 00:13:41.223 "max_latency_us": 118203.11272727273 00:13:41.223 } 00:13:41.223 ], 00:13:41.223 "core_count": 1 00:13:41.223 } 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.223 12:44:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:41.789 /dev/nbd0 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.789 1+0 records in 00:13:41.789 1+0 records out 00:13:41.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281236 s, 14.6 MB/s 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.789 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:42.047 /dev/nbd1 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.047 1+0 records in 00:13:42.047 1+0 records out 00:13:42.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407416 s, 10.1 MB/s 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.047 12:44:30 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # size=4096 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.048 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.306 12:44:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.873 12:44:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76718 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76718 ']' 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76718 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76718 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:42.873 killing process with pid 76718 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76718' 00:13:42.873 Received shutdown signal, test time was about 9.648321 seconds 00:13:42.873 00:13:42.873 Latency(us) 00:13:42.873 [2024-11-06T12:44:31.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.873 [2024-11-06T12:44:31.530Z] =================================================================================================================== 00:13:42.873 [2024-11-06T12:44:31.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76718 00:13:42.873 [2024-11-06 12:44:31.268393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.873 12:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76718 00:13:42.873 [2024-11-06 12:44:31.476170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:44.250 00:13:44.250 real 0m12.969s 00:13:44.250 user 0m16.978s 00:13:44.250 sys 0m1.478s 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.250 ************************************ 00:13:44.250 END TEST raid_rebuild_test_io 00:13:44.250 ************************************ 00:13:44.250 12:44:32 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:44.250 12:44:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:44.250 12:44:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.250 12:44:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.250 ************************************ 00:13:44.250 START TEST raid_rebuild_test_sb_io 00:13:44.250 ************************************ 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.250 12:44:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77100 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77100 00:13:44.250 
12:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77100 ']' 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:44.250 12:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.250 [2024-11-06 12:44:32.747976] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:13:44.250 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:44.250 Zero copy mechanism will not be used. 
00:13:44.250 [2024-11-06 12:44:32.748161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77100 ] 00:13:44.509 [2024-11-06 12:44:32.933549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.509 [2024-11-06 12:44:33.062083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.767 [2024-11-06 12:44:33.267821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.767 [2024-11-06 12:44:33.267886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 BaseBdev1_malloc 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 [2024-11-06 12:44:33.822224] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:45.335 [2024-11-06 12:44:33.822319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.335 [2024-11-06 12:44:33.822351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:45.335 [2024-11-06 12:44:33.822370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.335 [2024-11-06 12:44:33.825098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.335 [2024-11-06 12:44:33.825154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.335 BaseBdev1 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 BaseBdev2_malloc 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 [2024-11-06 12:44:33.874041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:45.335 [2024-11-06 12:44:33.874135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:45.335 [2024-11-06 12:44:33.874163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:45.335 [2024-11-06 12:44:33.874184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.335 [2024-11-06 12:44:33.876960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.335 [2024-11-06 12:44:33.877027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.335 BaseBdev2 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 spare_malloc 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 spare_delay 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 
[2024-11-06 12:44:33.947033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.335 [2024-11-06 12:44:33.947123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.335 [2024-11-06 12:44:33.947157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:45.335 [2024-11-06 12:44:33.947176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.335 [2024-11-06 12:44:33.949949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.335 [2024-11-06 12:44:33.950004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.335 spare 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 [2024-11-06 12:44:33.955105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.335 [2024-11-06 12:44:33.957499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.335 [2024-11-06 12:44:33.957759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:45.335 [2024-11-06 12:44:33.957783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.335 [2024-11-06 12:44:33.958105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:45.335 [2024-11-06 12:44:33.958357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:45.335 [2024-11-06 
12:44:33.958384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:45.335 [2024-11-06 12:44:33.958564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.335 12:44:33 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.594 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.594 "name": "raid_bdev1", 00:13:45.594 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:45.594 "strip_size_kb": 0, 00:13:45.594 "state": "online", 00:13:45.594 "raid_level": "raid1", 00:13:45.594 "superblock": true, 00:13:45.594 "num_base_bdevs": 2, 00:13:45.594 "num_base_bdevs_discovered": 2, 00:13:45.594 "num_base_bdevs_operational": 2, 00:13:45.594 "base_bdevs_list": [ 00:13:45.594 { 00:13:45.594 "name": "BaseBdev1", 00:13:45.594 "uuid": "f6a668a7-0df6-56d3-adab-0f4f097732a5", 00:13:45.594 "is_configured": true, 00:13:45.594 "data_offset": 2048, 00:13:45.594 "data_size": 63488 00:13:45.594 }, 00:13:45.594 { 00:13:45.594 "name": "BaseBdev2", 00:13:45.594 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:45.594 "is_configured": true, 00:13:45.594 "data_offset": 2048, 00:13:45.594 "data_size": 63488 00:13:45.594 } 00:13:45.594 ] 00:13:45.594 }' 00:13:45.594 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.594 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.853 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.853 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.853 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.853 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:45.853 [2024-11-06 12:44:34.459632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.853 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.853 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.111 [2024-11-06 12:44:34.567248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.111 "name": "raid_bdev1", 00:13:46.111 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:46.111 "strip_size_kb": 0, 00:13:46.111 "state": "online", 00:13:46.111 "raid_level": "raid1", 00:13:46.111 "superblock": true, 00:13:46.111 "num_base_bdevs": 2, 00:13:46.111 "num_base_bdevs_discovered": 1, 00:13:46.111 "num_base_bdevs_operational": 1, 00:13:46.111 "base_bdevs_list": [ 00:13:46.111 { 00:13:46.111 "name": null, 00:13:46.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.111 "is_configured": false, 00:13:46.111 "data_offset": 0, 00:13:46.111 "data_size": 63488 00:13:46.111 }, 00:13:46.111 { 00:13:46.111 "name": "BaseBdev2", 00:13:46.111 "uuid": 
"5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:46.111 "is_configured": true, 00:13:46.111 "data_offset": 2048, 00:13:46.111 "data_size": 63488 00:13:46.111 } 00:13:46.111 ] 00:13:46.111 }' 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.111 12:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.111 [2024-11-06 12:44:34.699334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:46.111 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.111 Zero copy mechanism will not be used. 00:13:46.111 Running I/O for 60 seconds... 00:13:46.677 12:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.678 12:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.678 12:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.678 [2024-11-06 12:44:35.090938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.678 12:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.678 12:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:46.678 [2024-11-06 12:44:35.145754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:46.678 [2024-11-06 12:44:35.148586] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.678 [2024-11-06 12:44:35.259706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:46.678 [2024-11-06 12:44:35.260665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:46.935 [2024-11-06 12:44:35.481256] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:46.935 [2024-11-06 12:44:35.481873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:47.193 130.00 IOPS, 390.00 MiB/s [2024-11-06T12:44:35.850Z] [2024-11-06 12:44:35.830540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.813 "name": "raid_bdev1", 00:13:47.813 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:47.813 "strip_size_kb": 0, 00:13:47.813 "state": "online", 00:13:47.813 "raid_level": "raid1", 00:13:47.813 "superblock": true, 00:13:47.813 "num_base_bdevs": 2, 00:13:47.813 
"num_base_bdevs_discovered": 2, 00:13:47.813 "num_base_bdevs_operational": 2, 00:13:47.813 "process": { 00:13:47.813 "type": "rebuild", 00:13:47.813 "target": "spare", 00:13:47.813 "progress": { 00:13:47.813 "blocks": 12288, 00:13:47.813 "percent": 19 00:13:47.813 } 00:13:47.813 }, 00:13:47.813 "base_bdevs_list": [ 00:13:47.813 { 00:13:47.813 "name": "spare", 00:13:47.813 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:47.813 "is_configured": true, 00:13:47.813 "data_offset": 2048, 00:13:47.813 "data_size": 63488 00:13:47.813 }, 00:13:47.813 { 00:13:47.813 "name": "BaseBdev2", 00:13:47.813 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:47.813 "is_configured": true, 00:13:47.813 "data_offset": 2048, 00:13:47.813 "data_size": 63488 00:13:47.813 } 00:13:47.813 ] 00:13:47.813 }' 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.813 [2024-11-06 12:44:36.245316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.813 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.813 [2024-11-06 12:44:36.294031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.813 [2024-11-06 12:44:36.364643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:13:48.072 [2024-11-06 12:44:36.467743] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:48.072 [2024-11-06 12:44:36.479132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.072 [2024-11-06 12:44:36.479256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.072 [2024-11-06 12:44:36.479286] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.072 [2024-11-06 12:44:36.523893] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.072 "name": "raid_bdev1", 00:13:48.072 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:48.072 "strip_size_kb": 0, 00:13:48.072 "state": "online", 00:13:48.072 "raid_level": "raid1", 00:13:48.072 "superblock": true, 00:13:48.072 "num_base_bdevs": 2, 00:13:48.072 "num_base_bdevs_discovered": 1, 00:13:48.072 "num_base_bdevs_operational": 1, 00:13:48.072 "base_bdevs_list": [ 00:13:48.072 { 00:13:48.072 "name": null, 00:13:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.072 "is_configured": false, 00:13:48.072 "data_offset": 0, 00:13:48.072 "data_size": 63488 00:13:48.072 }, 00:13:48.072 { 00:13:48.072 "name": "BaseBdev2", 00:13:48.072 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:48.072 "is_configured": true, 00:13:48.072 "data_offset": 2048, 00:13:48.072 "data_size": 63488 00:13:48.072 } 00:13:48.072 ] 00:13:48.072 }' 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.072 12:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.589 110.50 IOPS, 331.50 MiB/s [2024-11-06T12:44:37.246Z] 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.589 
12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.589 "name": "raid_bdev1", 00:13:48.589 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:48.589 "strip_size_kb": 0, 00:13:48.589 "state": "online", 00:13:48.589 "raid_level": "raid1", 00:13:48.589 "superblock": true, 00:13:48.589 "num_base_bdevs": 2, 00:13:48.589 "num_base_bdevs_discovered": 1, 00:13:48.589 "num_base_bdevs_operational": 1, 00:13:48.589 "base_bdevs_list": [ 00:13:48.589 { 00:13:48.589 "name": null, 00:13:48.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.589 "is_configured": false, 00:13:48.589 "data_offset": 0, 00:13:48.589 "data_size": 63488 00:13:48.589 }, 00:13:48.589 { 00:13:48.589 "name": "BaseBdev2", 00:13:48.589 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:48.589 "is_configured": true, 00:13:48.589 "data_offset": 2048, 00:13:48.589 "data_size": 63488 00:13:48.589 } 00:13:48.589 ] 00:13:48.589 }' 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.589 12:44:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.589 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.589 [2024-11-06 12:44:37.228503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.847 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.847 12:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:48.847 [2024-11-06 12:44:37.304199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:48.847 [2024-11-06 12:44:37.306871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.847 [2024-11-06 12:44:37.433850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.104 [2024-11-06 12:44:37.663251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.104 [2024-11-06 12:44:37.663948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.362 135.33 IOPS, 406.00 MiB/s [2024-11-06T12:44:38.019Z] [2024-11-06 12:44:38.014959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:49.362 [2024-11-06 12:44:38.015730] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:49.623 [2024-11-06 12:44:38.236337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:49.623 [2024-11-06 12:44:38.237058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.884 "name": "raid_bdev1", 00:13:49.884 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:49.884 "strip_size_kb": 0, 00:13:49.884 "state": "online", 00:13:49.884 "raid_level": "raid1", 00:13:49.884 "superblock": true, 00:13:49.884 "num_base_bdevs": 2, 00:13:49.884 "num_base_bdevs_discovered": 2, 
00:13:49.884 "num_base_bdevs_operational": 2, 00:13:49.884 "process": { 00:13:49.884 "type": "rebuild", 00:13:49.884 "target": "spare", 00:13:49.884 "progress": { 00:13:49.884 "blocks": 10240, 00:13:49.884 "percent": 16 00:13:49.884 } 00:13:49.884 }, 00:13:49.884 "base_bdevs_list": [ 00:13:49.884 { 00:13:49.884 "name": "spare", 00:13:49.884 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:49.884 "is_configured": true, 00:13:49.884 "data_offset": 2048, 00:13:49.884 "data_size": 63488 00:13:49.884 }, 00:13:49.884 { 00:13:49.884 "name": "BaseBdev2", 00:13:49.884 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:49.884 "is_configured": true, 00:13:49.884 "data_offset": 2048, 00:13:49.884 "data_size": 63488 00:13:49.884 } 00:13:49.884 ] 00:13:49.884 }' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:49.884 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:13:49.884 
12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.884 "name": "raid_bdev1", 00:13:49.884 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:49.884 "strip_size_kb": 0, 00:13:49.884 "state": "online", 00:13:49.884 "raid_level": "raid1", 00:13:49.884 "superblock": true, 00:13:49.884 "num_base_bdevs": 2, 00:13:49.884 "num_base_bdevs_discovered": 2, 00:13:49.884 "num_base_bdevs_operational": 2, 00:13:49.884 "process": { 00:13:49.884 "type": "rebuild", 00:13:49.884 "target": "spare", 00:13:49.884 "progress": { 00:13:49.884 "blocks": 10240, 00:13:49.884 "percent": 16 00:13:49.884 } 00:13:49.884 }, 00:13:49.884 "base_bdevs_list": [ 00:13:49.884 { 00:13:49.884 "name": "spare", 00:13:49.884 
"uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:49.884 "is_configured": true, 00:13:49.884 "data_offset": 2048, 00:13:49.884 "data_size": 63488 00:13:49.884 }, 00:13:49.884 { 00:13:49.884 "name": "BaseBdev2", 00:13:49.884 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:49.884 "is_configured": true, 00:13:49.884 "data_offset": 2048, 00:13:49.884 "data_size": 63488 00:13:49.884 } 00:13:49.884 ] 00:13:49.884 }' 00:13:49.884 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.142 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.142 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.142 [2024-11-06 12:44:38.581615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:50.142 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.142 12:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.142 [2024-11-06 12:44:38.683882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.399 117.25 IOPS, 351.75 MiB/s [2024-11-06T12:44:39.056Z] [2024-11-06 12:44:38.944375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:50.657 [2024-11-06 12:44:39.166651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:50.924 [2024-11-06 12:44:39.390924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:50.924 [2024-11-06 12:44:39.522095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 
24576 offset_end: 30720 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.182 "name": "raid_bdev1", 00:13:51.182 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:51.182 "strip_size_kb": 0, 00:13:51.182 "state": "online", 00:13:51.182 "raid_level": "raid1", 00:13:51.182 "superblock": true, 00:13:51.182 "num_base_bdevs": 2, 00:13:51.182 "num_base_bdevs_discovered": 2, 00:13:51.182 "num_base_bdevs_operational": 2, 00:13:51.182 "process": { 00:13:51.182 "type": "rebuild", 00:13:51.182 "target": "spare", 00:13:51.182 "progress": { 00:13:51.182 "blocks": 28672, 00:13:51.182 "percent": 45 00:13:51.182 } 00:13:51.182 }, 00:13:51.182 "base_bdevs_list": [ 00:13:51.182 { 
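Earlier in this run, `bdev_raid.sh: line 666` logged `[: =: unary operator expected`: a single-bracket test compared an unquoted variable that expanded to nothing, so the `[` builtin saw only `= false ]`. A minimal, errexit-safe reproduction (the variable name here is a placeholder, not the one in the script):

```shell
# Unquoted expansion of an empty variable collapses, so the test builtin
# sees `[ = false ]` -- two operands and no unary operator -- and reports
# a syntax error with exit status 2.
flag=""                                        # placeholder variable
err=$([ $flag = false ] 2>&1) || rc=$?
echo "unquoted: status=${rc} message: ${err}"

# Quoting preserves the empty operand; the test is well-formed and simply
# evaluates to false (exit status 1).
[ "$flag" = false ] || rc2=$?
echo "quoted: status=${rc2} (well-formed, just false)"
```

The quoted form is what keeps such checks safe when a caller passes an empty or unset argument.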
00:13:51.182 "name": "spare", 00:13:51.182 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:51.182 "is_configured": true, 00:13:51.182 "data_offset": 2048, 00:13:51.182 "data_size": 63488 00:13:51.182 }, 00:13:51.182 { 00:13:51.182 "name": "BaseBdev2", 00:13:51.182 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:51.182 "is_configured": true, 00:13:51.182 "data_offset": 2048, 00:13:51.182 "data_size": 63488 00:13:51.182 } 00:13:51.182 ] 00:13:51.182 }' 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.182 108.60 IOPS, 325.80 MiB/s [2024-11-06T12:44:39.839Z] 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.182 12:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.440 [2024-11-06 12:44:39.866792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:52.007 [2024-11-06 12:44:40.569832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:52.265 97.17 IOPS, 291.50 MiB/s [2024-11-06T12:44:40.922Z] 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.265 [2024-11-06 12:44:40.797002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.265 "name": "raid_bdev1", 00:13:52.265 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:52.265 "strip_size_kb": 0, 00:13:52.265 "state": "online", 00:13:52.265 "raid_level": "raid1", 00:13:52.265 "superblock": true, 00:13:52.265 "num_base_bdevs": 2, 00:13:52.265 "num_base_bdevs_discovered": 2, 00:13:52.265 "num_base_bdevs_operational": 2, 00:13:52.265 "process": { 00:13:52.265 "type": "rebuild", 00:13:52.265 "target": "spare", 00:13:52.265 "progress": { 00:13:52.265 "blocks": 49152, 00:13:52.265 "percent": 77 00:13:52.265 } 00:13:52.265 }, 00:13:52.265 "base_bdevs_list": [ 00:13:52.265 { 00:13:52.265 "name": "spare", 00:13:52.265 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:52.265 "is_configured": true, 00:13:52.265 "data_offset": 2048, 00:13:52.265 "data_size": 63488 00:13:52.265 }, 00:13:52.265 { 00:13:52.265 "name": "BaseBdev2", 00:13:52.265 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:52.265 "is_configured": true, 00:13:52.265 "data_offset": 2048, 00:13:52.265 
"data_size": 63488 00:13:52.265 } 00:13:52.265 ] 00:13:52.265 }' 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.265 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.523 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.523 12:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.523 [2024-11-06 12:44:41.020123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:53.089 [2024-11-06 12:44:41.474673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:53.089 [2024-11-06 12:44:41.703739] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:53.348 88.14 IOPS, 264.43 MiB/s [2024-11-06T12:44:42.005Z] [2024-11-06 12:44:41.811427] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:53.348 [2024-11-06 12:44:41.814569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.348 12:44:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.348 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.349 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.349 "name": "raid_bdev1", 00:13:53.349 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:53.349 "strip_size_kb": 0, 00:13:53.349 "state": "online", 00:13:53.349 "raid_level": "raid1", 00:13:53.349 "superblock": true, 00:13:53.349 "num_base_bdevs": 2, 00:13:53.349 "num_base_bdevs_discovered": 2, 00:13:53.349 "num_base_bdevs_operational": 2, 00:13:53.349 "base_bdevs_list": [ 00:13:53.349 { 00:13:53.349 "name": "spare", 00:13:53.349 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:53.349 "is_configured": true, 00:13:53.349 "data_offset": 2048, 00:13:53.349 "data_size": 63488 00:13:53.349 }, 00:13:53.349 { 00:13:53.349 "name": "BaseBdev2", 00:13:53.349 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:53.349 "is_configured": true, 00:13:53.349 "data_offset": 2048, 00:13:53.349 "data_size": 63488 00:13:53.349 } 00:13:53.349 ] 00:13:53.349 }' 00:13:53.349 12:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.607 12:44:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.607 "name": "raid_bdev1", 00:13:53.607 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:53.607 "strip_size_kb": 0, 00:13:53.607 "state": "online", 00:13:53.607 "raid_level": "raid1", 00:13:53.607 "superblock": true, 00:13:53.607 "num_base_bdevs": 2, 00:13:53.607 "num_base_bdevs_discovered": 2, 00:13:53.607 "num_base_bdevs_operational": 2, 00:13:53.607 "base_bdevs_list": [ 00:13:53.607 { 00:13:53.607 "name": "spare", 00:13:53.607 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:53.607 "is_configured": true, 00:13:53.607 "data_offset": 2048, 00:13:53.607 
"data_size": 63488 00:13:53.607 }, 00:13:53.607 { 00:13:53.607 "name": "BaseBdev2", 00:13:53.607 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:53.607 "is_configured": true, 00:13:53.607 "data_offset": 2048, 00:13:53.607 "data_size": 63488 00:13:53.607 } 00:13:53.607 ] 00:13:53.607 }' 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.607 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
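The verification steps above repeatedly pipe `bdev_raid_get_bdevs all` through two jq filters: `.[] | select(.name == "raid_bdev1")` picks one bdev out of the returned array, and `.process.type // "none"` (likewise `.process.target // "none"`) falls back to the string `none` once the rebuild `process` object disappears from the bdev info. The same pattern on canned input (JSON abbreviated from the output above):

```shell
# Two-stage jq filtering as used by the verify helpers: select one array
# element by name, then read a nested field with a `//` fallback so a
# missing process object yields "none" instead of null.
bdevs='[{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}},
        {"name":"other"}]'

info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
echo "$info" | jq -r '.process.type // "none"'      # rebuild in progress

# No "process" key at all: `//` supplies the fallback.
echo "$bdevs" | jq -r '.[] | select(.name == "other") | .process.type // "none"'
```

The `-r` flag emits raw strings rather than JSON-quoted ones, which is what makes the later `[[ rebuild == \r\e\b\u\i\l\d ]]` comparisons work.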
00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.865 "name": "raid_bdev1", 00:13:53.865 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:53.865 "strip_size_kb": 0, 00:13:53.865 "state": "online", 00:13:53.865 "raid_level": "raid1", 00:13:53.865 "superblock": true, 00:13:53.865 "num_base_bdevs": 2, 00:13:53.865 "num_base_bdevs_discovered": 2, 00:13:53.865 "num_base_bdevs_operational": 2, 00:13:53.865 "base_bdevs_list": [ 00:13:53.865 { 00:13:53.865 "name": "spare", 00:13:53.865 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:53.865 "is_configured": true, 00:13:53.865 "data_offset": 2048, 00:13:53.865 "data_size": 63488 00:13:53.865 }, 00:13:53.865 { 00:13:53.865 "name": "BaseBdev2", 00:13:53.865 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:53.865 "is_configured": true, 00:13:53.865 "data_offset": 2048, 00:13:53.865 "data_size": 63488 00:13:53.865 } 00:13:53.865 ] 00:13:53.865 }' 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.865 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.382 80.88 IOPS, 242.62 MiB/s [2024-11-06T12:44:43.039Z] 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.383 [2024-11-06 12:44:42.834614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.383 [2024-11-06 12:44:42.834872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.383 00:13:54.383 Latency(us) 00:13:54.383 [2024-11-06T12:44:43.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.383 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:54.383 raid_bdev1 : 8.21 79.63 238.90 0.00 0.00 17129.11 299.75 112483.61 00:13:54.383 [2024-11-06T12:44:43.040Z] =================================================================================================================== 00:13:54.383 [2024-11-06T12:44:43.040Z] Total : 79.63 238.90 0.00 0.00 17129.11 299.75 112483.61 00:13:54.383 [2024-11-06 12:44:42.934630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.383 [2024-11-06 12:44:42.934889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.383 [2024-11-06 12:44:42.935118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.383 [2024-11-06 12:44:42.935298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:54.383 { 00:13:54.383 "results": [ 00:13:54.383 { 00:13:54.383 "job": "raid_bdev1", 00:13:54.383 "core_mask": "0x1", 00:13:54.383 "workload": "randrw", 00:13:54.383 "percentage": 50, 00:13:54.383 "status": "finished", 00:13:54.383 "queue_depth": 2, 00:13:54.383 "io_size": 3145728, 00:13:54.383 "runtime": 8.212554, 00:13:54.383 "iops": 79.63417957434436, 00:13:54.383 "mibps": 238.90253872303308, 00:13:54.383 "io_failed": 0, 00:13:54.383 "io_timeout": 0, 00:13:54.383 "avg_latency_us": 17129.10874617737, 00:13:54.383 "min_latency_us": 299.75272727272727, 00:13:54.383 "max_latency_us": 112483.60727272727 
00:13:54.383 } 00:13:54.383 ], 00:13:54.383 "core_count": 1 00:13:54.383 } 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.383 12:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
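The rebuild wait earlier in this run is a polling loop bounded by bash's auto-incrementing `SECONDS` variable: the script sets `local timeout=452` and re-enters `verify_raid_bdev_process` plus `sleep 1` while `(( SECONDS < timeout ))` holds. A generic, errexit-safe sketch of that pattern (`wait_for` is an illustrative name, not a helper from the script):

```shell
# Poll a command until it succeeds or a deadline passes. bash advances
# SECONDS once per elapsed wall-clock second, so the arithmetic comparison
# bounds the loop without any external date/time calls.
wait_for() {                      # wait_for <timeout_seconds> <command...>
    local timeout=$((SECONDS + $1))
    shift
    while ((SECONDS < timeout)); do
        if "$@"; then
            return 0              # condition met before the deadline
        fi
        sleep 1
    done
    return 1                      # deadline passed without success
}

wait_for 3 true  && echo "condition met"
wait_for 1 false || echo "timed out"
```

Because `SECONDS` measures time since shell startup, computing the deadline as `SECONDS + N` (rather than resetting `SECONDS=0`) keeps the helper safe to nest.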
00:13:54.383 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:54.948 /dev/nbd0 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.948 1+0 records in 00:13:54.948 1+0 records out 00:13:54.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414816 s, 9.9 MB/s 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.948 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:55.206 /dev/nbd1 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:55.206 12:44:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.206 1+0 records in 00:13:55.206 1+0 records out 00:13:55.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319886 s, 12.8 MB/s 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
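The `waitfornbd` helper above gates each NBD start: it polls `/proc/partitions` (up to 20 attempts) for the device name with `grep -q -w`, then issues a single 4 KiB `O_DIRECT` read via `dd` to confirm the device actually serves I/O (the `1+0 records in/out` lines). A simplified sketch of the polling half; the partitions file is parameterized here purely for illustration, and the `dd` probe is shown in a comment:

```shell
# Poll a partitions listing for a device name, as waitfornbd does with
# /proc/partitions. The real helper follows a successful match with a
# direct-I/O read probe, roughly:
#   dd if=/dev/$name of=<testfile> bs=4096 count=1 iflag=direct
wait_for_partition() {        # wait_for_partition <name> [partitions-file]
    local name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" "$partitions"; then
            return 0          # device name appeared within the retry budget
        fi
        sleep 0.1
    done
    return 1                  # never showed up in 20 attempts
}
```

Matching with `grep -w` keeps `nbd1` from matching `nbd10`, and the follow-up `O_DIRECT` read fails fast if the kernel has created the node but no server is attached yet.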
00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.206 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.465 12:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.723 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.981 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.981 [2024-11-06 12:44:44.532608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.981 [2024-11-06 12:44:44.532687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.981 [2024-11-06 12:44:44.532718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:55.981 [2024-11-06 12:44:44.532736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.981 [2024-11-06 12:44:44.535696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.982 [2024-11-06 12:44:44.535753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.982 [2024-11-06 12:44:44.535869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:55.982 [2024-11-06 12:44:44.535941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.982 [2024-11-06 12:44:44.536122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.982 spare 00:13:55.982 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.982 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:55.982 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.982 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.982 [2024-11-06 12:44:44.636306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:55.982 [2024-11-06 12:44:44.636642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.240 [2024-11-06 12:44:44.637175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:56.240 [2024-11-06 12:44:44.638074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:56.240 [2024-11-06 12:44:44.638227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:56.240 [2024-11-06 12:44:44.638652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.240 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.240 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:56.240 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.240 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.241 12:44:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.241 "name": "raid_bdev1", 00:13:56.241 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:56.241 "strip_size_kb": 0, 00:13:56.241 "state": "online", 00:13:56.241 "raid_level": "raid1", 00:13:56.241 "superblock": true, 00:13:56.241 "num_base_bdevs": 2, 00:13:56.241 "num_base_bdevs_discovered": 2, 00:13:56.241 "num_base_bdevs_operational": 2, 00:13:56.241 "base_bdevs_list": [ 00:13:56.241 { 00:13:56.241 "name": "spare", 00:13:56.241 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:56.241 "is_configured": true, 00:13:56.241 "data_offset": 2048, 00:13:56.241 "data_size": 63488 00:13:56.241 }, 00:13:56.241 { 00:13:56.241 "name": "BaseBdev2", 00:13:56.241 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:56.241 "is_configured": true, 00:13:56.241 "data_offset": 2048, 00:13:56.241 "data_size": 63488 00:13:56.241 } 00:13:56.241 ] 00:13:56.241 }' 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.241 12:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.806 12:44:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.806 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.807 "name": "raid_bdev1", 00:13:56.807 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:56.807 "strip_size_kb": 0, 00:13:56.807 "state": "online", 00:13:56.807 "raid_level": "raid1", 00:13:56.807 "superblock": true, 00:13:56.807 "num_base_bdevs": 2, 00:13:56.807 "num_base_bdevs_discovered": 2, 00:13:56.807 "num_base_bdevs_operational": 2, 00:13:56.807 "base_bdevs_list": [ 00:13:56.807 { 00:13:56.807 "name": "spare", 00:13:56.807 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:56.807 "is_configured": true, 00:13:56.807 "data_offset": 2048, 00:13:56.807 "data_size": 63488 00:13:56.807 }, 00:13:56.807 { 00:13:56.807 "name": "BaseBdev2", 00:13:56.807 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:56.807 "is_configured": true, 00:13:56.807 "data_offset": 2048, 00:13:56.807 "data_size": 63488 00:13:56.807 } 00:13:56.807 ] 00:13:56.807 }' 00:13:56.807 12:44:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.807 [2024-11-06 12:44:45.362939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.807 "name": "raid_bdev1", 00:13:56.807 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:56.807 "strip_size_kb": 0, 00:13:56.807 "state": "online", 00:13:56.807 "raid_level": "raid1", 00:13:56.807 "superblock": true, 00:13:56.807 "num_base_bdevs": 2, 00:13:56.807 "num_base_bdevs_discovered": 1, 00:13:56.807 "num_base_bdevs_operational": 1, 00:13:56.807 "base_bdevs_list": [ 00:13:56.807 { 00:13:56.807 "name": null, 00:13:56.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.807 "is_configured": false, 00:13:56.807 
"data_offset": 0, 00:13:56.807 "data_size": 63488 00:13:56.807 }, 00:13:56.807 { 00:13:56.807 "name": "BaseBdev2", 00:13:56.807 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:56.807 "is_configured": true, 00:13:56.807 "data_offset": 2048, 00:13:56.807 "data_size": 63488 00:13:56.807 } 00:13:56.807 ] 00:13:56.807 }' 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.807 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.373 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.373 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.373 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.373 [2024-11-06 12:44:45.871168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.373 [2024-11-06 12:44:45.871458] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:57.373 [2024-11-06 12:44:45.871588] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:57.373 [2024-11-06 12:44:45.871655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.373 [2024-11-06 12:44:45.888080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:57.373 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.373 12:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:57.373 [2024-11-06 12:44:45.890672] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.307 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.307 "name": "raid_bdev1", 00:13:58.307 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:58.307 "strip_size_kb": 0, 00:13:58.307 "state": "online", 
00:13:58.307 "raid_level": "raid1", 00:13:58.307 "superblock": true, 00:13:58.307 "num_base_bdevs": 2, 00:13:58.307 "num_base_bdevs_discovered": 2, 00:13:58.307 "num_base_bdevs_operational": 2, 00:13:58.307 "process": { 00:13:58.307 "type": "rebuild", 00:13:58.307 "target": "spare", 00:13:58.307 "progress": { 00:13:58.307 "blocks": 20480, 00:13:58.307 "percent": 32 00:13:58.307 } 00:13:58.308 }, 00:13:58.308 "base_bdevs_list": [ 00:13:58.308 { 00:13:58.308 "name": "spare", 00:13:58.308 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:13:58.308 "is_configured": true, 00:13:58.308 "data_offset": 2048, 00:13:58.308 "data_size": 63488 00:13:58.308 }, 00:13:58.308 { 00:13:58.308 "name": "BaseBdev2", 00:13:58.308 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:58.308 "is_configured": true, 00:13:58.308 "data_offset": 2048, 00:13:58.308 "data_size": 63488 00:13:58.308 } 00:13:58.308 ] 00:13:58.308 }' 00:13:58.308 12:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.566 [2024-11-06 12:44:47.064157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.566 [2024-11-06 12:44:47.099828] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.566 [2024-11-06 
12:44:47.100121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.566 [2024-11-06 12:44:47.100157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.566 [2024-11-06 12:44:47.100171] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.566 "name": "raid_bdev1", 00:13:58.566 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:13:58.566 "strip_size_kb": 0, 00:13:58.566 "state": "online", 00:13:58.566 "raid_level": "raid1", 00:13:58.566 "superblock": true, 00:13:58.566 "num_base_bdevs": 2, 00:13:58.566 "num_base_bdevs_discovered": 1, 00:13:58.566 "num_base_bdevs_operational": 1, 00:13:58.566 "base_bdevs_list": [ 00:13:58.566 { 00:13:58.566 "name": null, 00:13:58.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.566 "is_configured": false, 00:13:58.566 "data_offset": 0, 00:13:58.566 "data_size": 63488 00:13:58.566 }, 00:13:58.566 { 00:13:58.566 "name": "BaseBdev2", 00:13:58.566 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:13:58.566 "is_configured": true, 00:13:58.566 "data_offset": 2048, 00:13:58.566 "data_size": 63488 00:13:58.566 } 00:13:58.566 ] 00:13:58.566 }' 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.566 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.132 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.132 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.132 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.132 [2024-11-06 12:44:47.651365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.132 [2024-11-06 12:44:47.651521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.132 [2024-11-06 12:44:47.651601] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:59.132 [2024-11-06 12:44:47.651655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.132 [2024-11-06 12:44:47.652412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.132 [2024-11-06 12:44:47.652577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.132 [2024-11-06 12:44:47.652851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:59.132 [2024-11-06 12:44:47.652993] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:59.132 [2024-11-06 12:44:47.653028] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:59.132 [2024-11-06 12:44:47.653090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.132 [2024-11-06 12:44:47.670336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:59.132 spare 00:13:59.132 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.132 12:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:59.132 [2024-11-06 12:44:47.673257] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.066 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.325 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.325 "name": "raid_bdev1", 00:14:00.325 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:00.325 "strip_size_kb": 0, 00:14:00.326 "state": "online", 00:14:00.326 "raid_level": "raid1", 00:14:00.326 "superblock": true, 00:14:00.326 "num_base_bdevs": 2, 00:14:00.326 "num_base_bdevs_discovered": 2, 00:14:00.326 "num_base_bdevs_operational": 2, 00:14:00.326 "process": { 00:14:00.326 "type": "rebuild", 00:14:00.326 "target": "spare", 00:14:00.326 "progress": { 00:14:00.326 "blocks": 20480, 00:14:00.326 "percent": 32 00:14:00.326 } 00:14:00.326 }, 00:14:00.326 "base_bdevs_list": [ 00:14:00.326 { 00:14:00.326 "name": "spare", 00:14:00.326 "uuid": "ac3acfae-bea9-588a-a226-84c5f1850d3d", 00:14:00.326 "is_configured": true, 00:14:00.326 "data_offset": 2048, 00:14:00.326 "data_size": 63488 00:14:00.326 }, 00:14:00.326 { 00:14:00.326 "name": "BaseBdev2", 00:14:00.326 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:00.326 "is_configured": true, 00:14:00.326 "data_offset": 2048, 00:14:00.326 "data_size": 63488 00:14:00.326 } 00:14:00.326 ] 00:14:00.326 }' 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.326 [2024-11-06 12:44:48.843010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.326 [2024-11-06 12:44:48.884630] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.326 [2024-11-06 12:44:48.884937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.326 [2024-11-06 12:44:48.884974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.326 [2024-11-06 12:44:48.884997] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.326 "name": "raid_bdev1", 00:14:00.326 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:00.326 "strip_size_kb": 0, 00:14:00.326 "state": "online", 00:14:00.326 "raid_level": "raid1", 00:14:00.326 "superblock": true, 00:14:00.326 "num_base_bdevs": 2, 00:14:00.326 "num_base_bdevs_discovered": 1, 00:14:00.326 "num_base_bdevs_operational": 1, 00:14:00.326 "base_bdevs_list": [ 00:14:00.326 { 00:14:00.326 "name": null, 00:14:00.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.326 "is_configured": false, 00:14:00.326 "data_offset": 0, 00:14:00.326 "data_size": 63488 00:14:00.326 }, 00:14:00.326 { 00:14:00.326 "name": "BaseBdev2", 00:14:00.326 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:00.326 "is_configured": true, 00:14:00.326 "data_offset": 2048, 00:14:00.326 "data_size": 63488 00:14:00.326 } 00:14:00.326 ] 00:14:00.326 }' 
00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.326 12:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.892 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.893 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.893 "name": "raid_bdev1", 00:14:00.893 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:00.893 "strip_size_kb": 0, 00:14:00.893 "state": "online", 00:14:00.893 "raid_level": "raid1", 00:14:00.893 "superblock": true, 00:14:00.893 "num_base_bdevs": 2, 00:14:00.893 "num_base_bdevs_discovered": 1, 00:14:00.893 "num_base_bdevs_operational": 1, 00:14:00.893 "base_bdevs_list": [ 00:14:00.893 { 00:14:00.893 "name": null, 00:14:00.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.893 "is_configured": false, 00:14:00.893 "data_offset": 0, 
00:14:00.893 "data_size": 63488 00:14:00.893 }, 00:14:00.893 { 00:14:00.893 "name": "BaseBdev2", 00:14:00.893 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:00.893 "is_configured": true, 00:14:00.893 "data_offset": 2048, 00:14:00.893 "data_size": 63488 00:14:00.893 } 00:14:00.893 ] 00:14:00.893 }' 00:14:00.893 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.893 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.893 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.158 [2024-11-06 12:44:49.605808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:01.158 [2024-11-06 12:44:49.605885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.158 [2024-11-06 12:44:49.605918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:01.158 [2024-11-06 12:44:49.605936] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.158 [2024-11-06 12:44:49.606570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.158 [2024-11-06 12:44:49.606616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.158 [2024-11-06 12:44:49.606724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:01.158 [2024-11-06 12:44:49.606761] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:01.158 [2024-11-06 12:44:49.606774] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:01.158 [2024-11-06 12:44:49.606792] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:01.158 BaseBdev1 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.158 12:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.094 "name": "raid_bdev1", 00:14:02.094 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:02.094 "strip_size_kb": 0, 00:14:02.094 "state": "online", 00:14:02.094 "raid_level": "raid1", 00:14:02.094 "superblock": true, 00:14:02.094 "num_base_bdevs": 2, 00:14:02.094 "num_base_bdevs_discovered": 1, 00:14:02.094 "num_base_bdevs_operational": 1, 00:14:02.094 "base_bdevs_list": [ 00:14:02.094 { 00:14:02.094 "name": null, 00:14:02.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.094 "is_configured": false, 00:14:02.094 "data_offset": 0, 00:14:02.094 "data_size": 63488 00:14:02.094 }, 00:14:02.094 { 00:14:02.094 "name": "BaseBdev2", 00:14:02.094 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:02.094 "is_configured": true, 00:14:02.094 "data_offset": 2048, 00:14:02.094 "data_size": 63488 00:14:02.094 } 00:14:02.094 ] 00:14:02.094 }' 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.094 12:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:02.659 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.659 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.659 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.659 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.660 "name": "raid_bdev1", 00:14:02.660 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:02.660 "strip_size_kb": 0, 00:14:02.660 "state": "online", 00:14:02.660 "raid_level": "raid1", 00:14:02.660 "superblock": true, 00:14:02.660 "num_base_bdevs": 2, 00:14:02.660 "num_base_bdevs_discovered": 1, 00:14:02.660 "num_base_bdevs_operational": 1, 00:14:02.660 "base_bdevs_list": [ 00:14:02.660 { 00:14:02.660 "name": null, 00:14:02.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.660 "is_configured": false, 00:14:02.660 "data_offset": 0, 00:14:02.660 "data_size": 63488 00:14:02.660 }, 00:14:02.660 { 00:14:02.660 "name": "BaseBdev2", 00:14:02.660 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:02.660 "is_configured": true, 
00:14:02.660 "data_offset": 2048, 00:14:02.660 "data_size": 63488 00:14:02.660 } 00:14:02.660 ] 00:14:02.660 }' 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.660 [2024-11-06 12:44:51.230592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.660 [2024-11-06 12:44:51.230979] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:02.660 [2024-11-06 12:44:51.231013] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:02.660 request: 00:14:02.660 { 00:14:02.660 "base_bdev": "BaseBdev1", 00:14:02.660 "raid_bdev": "raid_bdev1", 00:14:02.660 "method": "bdev_raid_add_base_bdev", 00:14:02.660 "req_id": 1 00:14:02.660 } 00:14:02.660 Got JSON-RPC error response 00:14:02.660 response: 00:14:02.660 { 00:14:02.660 "code": -22, 00:14:02.660 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:02.660 } 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.660 12:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.592 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.849 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.849 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.849 "name": "raid_bdev1", 00:14:03.849 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:03.849 "strip_size_kb": 0, 00:14:03.849 "state": "online", 00:14:03.849 "raid_level": "raid1", 00:14:03.849 "superblock": true, 00:14:03.849 "num_base_bdevs": 2, 00:14:03.849 "num_base_bdevs_discovered": 1, 00:14:03.849 "num_base_bdevs_operational": 1, 00:14:03.849 "base_bdevs_list": [ 00:14:03.849 { 00:14:03.849 "name": null, 00:14:03.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.849 "is_configured": false, 00:14:03.849 "data_offset": 0, 00:14:03.849 "data_size": 63488 00:14:03.849 }, 00:14:03.849 { 00:14:03.849 "name": "BaseBdev2", 00:14:03.849 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:03.849 "is_configured": true, 00:14:03.849 "data_offset": 2048, 00:14:03.849 "data_size": 63488 00:14:03.849 } 00:14:03.849 ] 00:14:03.849 }' 
00:14:03.849 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.849 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.417 "name": "raid_bdev1", 00:14:04.417 "uuid": "59a5e176-36f1-4ed7-bfa9-f5ffbd6f0c6a", 00:14:04.417 "strip_size_kb": 0, 00:14:04.417 "state": "online", 00:14:04.417 "raid_level": "raid1", 00:14:04.417 "superblock": true, 00:14:04.417 "num_base_bdevs": 2, 00:14:04.417 "num_base_bdevs_discovered": 1, 00:14:04.417 "num_base_bdevs_operational": 1, 00:14:04.417 "base_bdevs_list": [ 00:14:04.417 { 00:14:04.417 "name": null, 00:14:04.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.417 "is_configured": false, 00:14:04.417 "data_offset": 0, 
00:14:04.417 "data_size": 63488 00:14:04.417 }, 00:14:04.417 { 00:14:04.417 "name": "BaseBdev2", 00:14:04.417 "uuid": "5bd05d3d-482b-5a01-9ce6-9667ca437451", 00:14:04.417 "is_configured": true, 00:14:04.417 "data_offset": 2048, 00:14:04.417 "data_size": 63488 00:14:04.417 } 00:14:04.417 ] 00:14:04.417 }' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77100 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77100 ']' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77100 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77100 00:14:04.417 killing process with pid 77100 00:14:04.417 Received shutdown signal, test time was about 18.291843 seconds 00:14:04.417 00:14:04.417 Latency(us) 00:14:04.417 [2024-11-06T12:44:53.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.417 [2024-11-06T12:44:53.074Z] =================================================================================================================== 00:14:04.417 [2024-11-06T12:44:53.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77100' 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77100 00:14:04.417 12:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77100 00:14:04.417 [2024-11-06 12:44:52.993839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.417 [2024-11-06 12:44:52.994033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.417 [2024-11-06 12:44:52.994130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.417 [2024-11-06 12:44:52.994147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:04.675 [2024-11-06 12:44:53.218464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.047 ************************************ 00:14:06.047 END TEST raid_rebuild_test_sb_io 00:14:06.047 ************************************ 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:06.047 00:14:06.047 real 0m21.766s 00:14:06.047 user 0m29.536s 00:14:06.047 sys 0m2.026s 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.047 12:44:54 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:06.047 12:44:54 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:06.047 12:44:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:14:06.047 12:44:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:06.047 12:44:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.047 ************************************ 00:14:06.047 START TEST raid_rebuild_test 00:14:06.047 ************************************ 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77811 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77811 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77811 ']' 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:06.047 12:44:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.047 12:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.048 12:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.048 12:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.048 12:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.048 [2024-11-06 12:44:54.566072] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:14:06.048 [2024-11-06 12:44:54.566556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77811 ] 00:14:06.048 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:06.048 Zero copy mechanism will not be used. 
00:14:06.305 [2024-11-06 12:44:54.755799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.305 [2024-11-06 12:44:54.924772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.562 [2024-11-06 12:44:55.148213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.562 [2024-11-06 12:44:55.148463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 BaseBdev1_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-11-06 12:44:55.569596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.128 [2024-11-06 12:44:55.569887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-11-06 12:44:55.570056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:07.128 [2024-11-06 12:44:55.570214] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 BaseBdev1 00:14:07.128 [2024-11-06 12:44:55.573963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-11-06 12:44:55.574014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 BaseBdev2_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-11-06 12:44:55.628121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:07.128 [2024-11-06 12:44:55.628364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-11-06 12:44:55.628409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:07.128 [2024-11-06 12:44:55.628436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 BaseBdev2 00:14:07.128 [2024-11-06 12:44:55.631965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-11-06 
12:44:55.632014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 BaseBdev3_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-11-06 12:44:55.699136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:07.128 [2024-11-06 12:44:55.699385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-11-06 12:44:55.699488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:07.128 [2024-11-06 12:44:55.699676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 [2024-11-06 12:44:55.703451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-11-06 12:44:55.703625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:07.128 BaseBdev3 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 
12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 BaseBdev4_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-11-06 12:44:55.757548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:07.128 [2024-11-06 12:44:55.757628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-11-06 12:44:55.757662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:07.128 [2024-11-06 12:44:55.757682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 BaseBdev4 00:14:07.128 [2024-11-06 12:44:55.761212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-11-06 12:44:55.761260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:07.128 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 spare_malloc 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 spare_delay 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.386 [2024-11-06 12:44:55.824548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.386 [2024-11-06 12:44:55.824781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.386 [2024-11-06 12:44:55.824864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:07.386 [2024-11-06 12:44:55.824998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.386 [2024-11-06 12:44:55.828790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.386 spare 00:14:07.386 [2024-11-06 12:44:55.828980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.386 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.387 [2024-11-06 12:44:55.833331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.387 [2024-11-06 12:44:55.836668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.387 [2024-11-06 12:44:55.836897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.387 [2024-11-06 12:44:55.837037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:07.387 [2024-11-06 12:44:55.837296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:07.387 [2024-11-06 12:44:55.837399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:07.387 [2024-11-06 12:44:55.837941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:07.387 [2024-11-06 12:44:55.838373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:07.387 [2024-11-06 12:44:55.838506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:07.387 [2024-11-06 12:44:55.838969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.387 "name": "raid_bdev1", 00:14:07.387 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:07.387 "strip_size_kb": 0, 00:14:07.387 "state": "online", 00:14:07.387 "raid_level": "raid1", 00:14:07.387 "superblock": false, 00:14:07.387 "num_base_bdevs": 4, 00:14:07.387 "num_base_bdevs_discovered": 4, 00:14:07.387 "num_base_bdevs_operational": 4, 00:14:07.387 "base_bdevs_list": [ 00:14:07.387 { 00:14:07.387 "name": "BaseBdev1", 00:14:07.387 "uuid": "e64a2380-b06e-5e86-95ee-377bac734ef0", 00:14:07.387 "is_configured": true, 00:14:07.387 "data_offset": 0, 00:14:07.387 "data_size": 65536 00:14:07.387 }, 00:14:07.387 { 00:14:07.387 
"name": "BaseBdev2", 00:14:07.387 "uuid": "e0e128be-4f96-50e9-9c92-7aedf0f73477", 00:14:07.387 "is_configured": true, 00:14:07.387 "data_offset": 0, 00:14:07.387 "data_size": 65536 00:14:07.387 }, 00:14:07.387 { 00:14:07.387 "name": "BaseBdev3", 00:14:07.387 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:07.387 "is_configured": true, 00:14:07.387 "data_offset": 0, 00:14:07.387 "data_size": 65536 00:14:07.387 }, 00:14:07.387 { 00:14:07.387 "name": "BaseBdev4", 00:14:07.387 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:07.387 "is_configured": true, 00:14:07.387 "data_offset": 0, 00:14:07.387 "data_size": 65536 00:14:07.387 } 00:14:07.387 ] 00:14:07.387 }' 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.387 12:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.953 [2024-11-06 12:44:56.322043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.953 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:08.210 [2024-11-06 12:44:56.749780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:08.210 /dev/nbd0 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.210 
12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:08.210 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.211 1+0 records in 00:14:08.211 1+0 records out 00:14:08.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366331 s, 11.2 MB/s 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:08.211 12:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:18.179 65536+0 records in 00:14:18.179 65536+0 records out 00:14:18.179 33554432 bytes (34 MB, 32 MiB) copied, 8.58427 s, 3.9 MB/s 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:18.179 [2024-11-06 12:45:05.696510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:18.179 12:45:05 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.179 [2024-11-06 12:45:05.732644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.179 "name": "raid_bdev1", 00:14:18.179 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:18.179 "strip_size_kb": 0, 00:14:18.179 "state": "online", 00:14:18.179 "raid_level": "raid1", 00:14:18.179 "superblock": false, 00:14:18.179 "num_base_bdevs": 4, 00:14:18.179 "num_base_bdevs_discovered": 3, 00:14:18.179 "num_base_bdevs_operational": 3, 00:14:18.179 "base_bdevs_list": [ 00:14:18.179 { 00:14:18.179 "name": null, 00:14:18.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.179 "is_configured": false, 00:14:18.179 "data_offset": 0, 00:14:18.179 "data_size": 65536 00:14:18.179 }, 00:14:18.179 { 00:14:18.179 "name": "BaseBdev2", 00:14:18.179 "uuid": "e0e128be-4f96-50e9-9c92-7aedf0f73477", 00:14:18.179 "is_configured": true, 00:14:18.179 "data_offset": 0, 00:14:18.179 "data_size": 65536 00:14:18.179 }, 00:14:18.179 { 00:14:18.179 "name": "BaseBdev3", 00:14:18.179 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:18.179 "is_configured": true, 00:14:18.179 "data_offset": 0, 00:14:18.179 "data_size": 65536 00:14:18.179 }, 00:14:18.179 { 00:14:18.179 "name": "BaseBdev4", 00:14:18.179 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:18.179 "is_configured": true, 00:14:18.179 "data_offset": 0, 00:14:18.179 "data_size": 65536 00:14:18.179 } 00:14:18.179 ] 00:14:18.179 }' 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.179 12:45:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.179 12:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.179 12:45:06 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.179 12:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.179 [2024-11-06 12:45:06.212779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.179 [2024-11-06 12:45:06.228257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:18.179 12:45:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.179 12:45:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:18.179 [2024-11-06 12:45:06.231670] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.745 "name": "raid_bdev1", 00:14:18.745 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 
00:14:18.745 "strip_size_kb": 0, 00:14:18.745 "state": "online", 00:14:18.745 "raid_level": "raid1", 00:14:18.745 "superblock": false, 00:14:18.745 "num_base_bdevs": 4, 00:14:18.745 "num_base_bdevs_discovered": 4, 00:14:18.745 "num_base_bdevs_operational": 4, 00:14:18.745 "process": { 00:14:18.745 "type": "rebuild", 00:14:18.745 "target": "spare", 00:14:18.745 "progress": { 00:14:18.745 "blocks": 18432, 00:14:18.745 "percent": 28 00:14:18.745 } 00:14:18.745 }, 00:14:18.745 "base_bdevs_list": [ 00:14:18.745 { 00:14:18.745 "name": "spare", 00:14:18.745 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 }, 00:14:18.745 { 00:14:18.745 "name": "BaseBdev2", 00:14:18.745 "uuid": "e0e128be-4f96-50e9-9c92-7aedf0f73477", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 }, 00:14:18.745 { 00:14:18.745 "name": "BaseBdev3", 00:14:18.745 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 }, 00:14:18.745 { 00:14:18.745 "name": "BaseBdev4", 00:14:18.745 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:18.745 "is_configured": true, 00:14:18.745 "data_offset": 0, 00:14:18.745 "data_size": 65536 00:14:18.745 } 00:14:18.745 ] 00:14:18.745 }' 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.745 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.745 [2024-11-06 12:45:07.398283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.004 [2024-11-06 12:45:07.444515] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.004 [2024-11-06 12:45:07.445214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.004 [2024-11-06 12:45:07.445251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.004 [2024-11-06 12:45:07.445270] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.004 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.005 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.005 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.005 "name": "raid_bdev1", 00:14:19.005 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:19.005 "strip_size_kb": 0, 00:14:19.005 "state": "online", 00:14:19.005 "raid_level": "raid1", 00:14:19.005 "superblock": false, 00:14:19.005 "num_base_bdevs": 4, 00:14:19.005 "num_base_bdevs_discovered": 3, 00:14:19.005 "num_base_bdevs_operational": 3, 00:14:19.005 "base_bdevs_list": [ 00:14:19.005 { 00:14:19.005 "name": null, 00:14:19.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.005 "is_configured": false, 00:14:19.005 "data_offset": 0, 00:14:19.005 "data_size": 65536 00:14:19.005 }, 00:14:19.005 { 00:14:19.005 "name": "BaseBdev2", 00:14:19.005 "uuid": "e0e128be-4f96-50e9-9c92-7aedf0f73477", 00:14:19.005 "is_configured": true, 00:14:19.005 "data_offset": 0, 00:14:19.005 "data_size": 65536 00:14:19.005 }, 00:14:19.005 { 00:14:19.005 "name": "BaseBdev3", 00:14:19.005 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:19.005 "is_configured": true, 00:14:19.005 "data_offset": 0, 00:14:19.005 "data_size": 65536 00:14:19.005 }, 00:14:19.005 { 00:14:19.005 "name": "BaseBdev4", 00:14:19.005 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:19.005 "is_configured": true, 00:14:19.005 "data_offset": 0, 00:14:19.005 "data_size": 65536 00:14:19.005 } 00:14:19.005 ] 00:14:19.005 }' 00:14:19.005 12:45:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.005 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.571 12:45:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.571 "name": "raid_bdev1", 00:14:19.571 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:19.571 "strip_size_kb": 0, 00:14:19.571 "state": "online", 00:14:19.571 "raid_level": "raid1", 00:14:19.571 "superblock": false, 00:14:19.571 "num_base_bdevs": 4, 00:14:19.571 "num_base_bdevs_discovered": 3, 00:14:19.571 "num_base_bdevs_operational": 3, 00:14:19.571 "base_bdevs_list": [ 00:14:19.571 { 00:14:19.571 "name": null, 00:14:19.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.571 "is_configured": false, 00:14:19.571 "data_offset": 0, 00:14:19.571 "data_size": 65536 00:14:19.571 }, 00:14:19.571 { 00:14:19.571 "name": "BaseBdev2", 00:14:19.571 "uuid": 
"e0e128be-4f96-50e9-9c92-7aedf0f73477", 00:14:19.571 "is_configured": true, 00:14:19.571 "data_offset": 0, 00:14:19.571 "data_size": 65536 00:14:19.571 }, 00:14:19.571 { 00:14:19.571 "name": "BaseBdev3", 00:14:19.571 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:19.571 "is_configured": true, 00:14:19.571 "data_offset": 0, 00:14:19.571 "data_size": 65536 00:14:19.571 }, 00:14:19.571 { 00:14:19.571 "name": "BaseBdev4", 00:14:19.571 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:19.571 "is_configured": true, 00:14:19.571 "data_offset": 0, 00:14:19.571 "data_size": 65536 00:14:19.571 } 00:14:19.571 ] 00:14:19.571 }' 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.571 [2024-11-06 12:45:08.147117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.571 [2024-11-06 12:45:08.161581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.571 12:45:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:19.571 [2024-11-06 12:45:08.165121] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.946 12:45:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.946 "name": "raid_bdev1", 00:14:20.946 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:20.946 "strip_size_kb": 0, 00:14:20.946 "state": "online", 00:14:20.946 "raid_level": "raid1", 00:14:20.946 "superblock": false, 00:14:20.946 "num_base_bdevs": 4, 00:14:20.946 "num_base_bdevs_discovered": 4, 00:14:20.946 "num_base_bdevs_operational": 4, 00:14:20.946 "process": { 00:14:20.946 "type": "rebuild", 00:14:20.946 "target": "spare", 00:14:20.946 "progress": { 00:14:20.946 "blocks": 18432, 00:14:20.946 "percent": 28 00:14:20.946 } 00:14:20.946 }, 00:14:20.946 "base_bdevs_list": [ 00:14:20.946 { 00:14:20.946 "name": "spare", 00:14:20.946 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:20.946 "is_configured": true, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 }, 00:14:20.946 { 
00:14:20.946 "name": "BaseBdev2", 00:14:20.946 "uuid": "e0e128be-4f96-50e9-9c92-7aedf0f73477", 00:14:20.946 "is_configured": true, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 }, 00:14:20.946 { 00:14:20.946 "name": "BaseBdev3", 00:14:20.946 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:20.946 "is_configured": true, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 }, 00:14:20.946 { 00:14:20.946 "name": "BaseBdev4", 00:14:20.946 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:20.946 "is_configured": true, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 } 00:14:20.946 ] 00:14:20.946 }' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.946 [2024-11-06 12:45:09.323189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.946 
[2024-11-06 12:45:09.377367] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.946 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.946 "name": "raid_bdev1", 00:14:20.946 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:20.946 "strip_size_kb": 0, 00:14:20.946 "state": "online", 00:14:20.946 "raid_level": "raid1", 00:14:20.946 "superblock": false, 00:14:20.946 "num_base_bdevs": 4, 00:14:20.946 "num_base_bdevs_discovered": 3, 00:14:20.946 "num_base_bdevs_operational": 3, 00:14:20.946 "process": { 
00:14:20.946 "type": "rebuild", 00:14:20.946 "target": "spare", 00:14:20.946 "progress": { 00:14:20.946 "blocks": 24576, 00:14:20.946 "percent": 37 00:14:20.946 } 00:14:20.946 }, 00:14:20.946 "base_bdevs_list": [ 00:14:20.946 { 00:14:20.946 "name": "spare", 00:14:20.946 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:20.946 "is_configured": true, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 }, 00:14:20.946 { 00:14:20.946 "name": null, 00:14:20.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.946 "is_configured": false, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 }, 00:14:20.946 { 00:14:20.946 "name": "BaseBdev3", 00:14:20.946 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:20.946 "is_configured": true, 00:14:20.946 "data_offset": 0, 00:14:20.946 "data_size": 65536 00:14:20.946 }, 00:14:20.946 { 00:14:20.946 "name": "BaseBdev4", 00:14:20.947 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:20.947 "is_configured": true, 00:14:20.947 "data_offset": 0, 00:14:20.947 "data_size": 65536 00:14:20.947 } 00:14:20.947 ] 00:14:20.947 }' 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=483 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.947 "name": "raid_bdev1", 00:14:20.947 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:20.947 "strip_size_kb": 0, 00:14:20.947 "state": "online", 00:14:20.947 "raid_level": "raid1", 00:14:20.947 "superblock": false, 00:14:20.947 "num_base_bdevs": 4, 00:14:20.947 "num_base_bdevs_discovered": 3, 00:14:20.947 "num_base_bdevs_operational": 3, 00:14:20.947 "process": { 00:14:20.947 "type": "rebuild", 00:14:20.947 "target": "spare", 00:14:20.947 "progress": { 00:14:20.947 "blocks": 26624, 00:14:20.947 "percent": 40 00:14:20.947 } 00:14:20.947 }, 00:14:20.947 "base_bdevs_list": [ 00:14:20.947 { 00:14:20.947 "name": "spare", 00:14:20.947 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:20.947 "is_configured": true, 00:14:20.947 "data_offset": 0, 00:14:20.947 "data_size": 65536 00:14:20.947 }, 00:14:20.947 { 00:14:20.947 "name": null, 00:14:20.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.947 "is_configured": false, 00:14:20.947 "data_offset": 0, 00:14:20.947 "data_size": 65536 00:14:20.947 }, 
00:14:20.947 { 00:14:20.947 "name": "BaseBdev3", 00:14:20.947 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:20.947 "is_configured": true, 00:14:20.947 "data_offset": 0, 00:14:20.947 "data_size": 65536 00:14:20.947 }, 00:14:20.947 { 00:14:20.947 "name": "BaseBdev4", 00:14:20.947 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:20.947 "is_configured": true, 00:14:20.947 "data_offset": 0, 00:14:20.947 "data_size": 65536 00:14:20.947 } 00:14:20.947 ] 00:14:20.947 }' 00:14:20.947 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.203 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.204 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.204 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.204 12:45:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.138 "name": "raid_bdev1", 00:14:22.138 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:22.138 "strip_size_kb": 0, 00:14:22.138 "state": "online", 00:14:22.138 "raid_level": "raid1", 00:14:22.138 "superblock": false, 00:14:22.138 "num_base_bdevs": 4, 00:14:22.138 "num_base_bdevs_discovered": 3, 00:14:22.138 "num_base_bdevs_operational": 3, 00:14:22.138 "process": { 00:14:22.138 "type": "rebuild", 00:14:22.138 "target": "spare", 00:14:22.138 "progress": { 00:14:22.138 "blocks": 51200, 00:14:22.138 "percent": 78 00:14:22.138 } 00:14:22.138 }, 00:14:22.138 "base_bdevs_list": [ 00:14:22.138 { 00:14:22.138 "name": "spare", 00:14:22.138 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 0, 00:14:22.138 "data_size": 65536 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "name": null, 00:14:22.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.138 "is_configured": false, 00:14:22.138 "data_offset": 0, 00:14:22.138 "data_size": 65536 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "name": "BaseBdev3", 00:14:22.138 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 0, 00:14:22.138 "data_size": 65536 00:14:22.138 }, 00:14:22.138 { 00:14:22.138 "name": "BaseBdev4", 00:14:22.138 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:22.138 "is_configured": true, 00:14:22.138 "data_offset": 0, 00:14:22.138 "data_size": 65536 00:14:22.138 } 00:14:22.138 ] 00:14:22.138 }' 00:14:22.138 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.397 12:45:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.397 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.397 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.397 12:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.968 [2024-11-06 12:45:11.396649] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:22.968 [2024-11-06 12:45:11.397061] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:22.968 [2024-11-06 12:45:11.397839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.226 12:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 12:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 12:45:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.484 "name": "raid_bdev1", 00:14:23.484 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:23.484 "strip_size_kb": 0, 00:14:23.484 "state": "online", 00:14:23.484 "raid_level": "raid1", 00:14:23.484 "superblock": false, 00:14:23.484 "num_base_bdevs": 4, 00:14:23.484 "num_base_bdevs_discovered": 3, 00:14:23.484 "num_base_bdevs_operational": 3, 00:14:23.484 "base_bdevs_list": [ 00:14:23.484 { 00:14:23.484 "name": "spare", 00:14:23.484 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": null, 00:14:23.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.484 "is_configured": false, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev3", 00:14:23.484 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev4", 00:14:23.484 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 } 00:14:23.484 ] 00:14:23.484 }' 00:14:23.484 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.484 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.484 12:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.484 "name": "raid_bdev1", 00:14:23.484 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:23.484 "strip_size_kb": 0, 00:14:23.484 "state": "online", 00:14:23.484 "raid_level": "raid1", 00:14:23.484 "superblock": false, 00:14:23.484 "num_base_bdevs": 4, 00:14:23.484 "num_base_bdevs_discovered": 3, 00:14:23.484 "num_base_bdevs_operational": 3, 00:14:23.484 "base_bdevs_list": [ 00:14:23.484 { 00:14:23.484 "name": "spare", 00:14:23.484 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": null, 00:14:23.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.484 "is_configured": false, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev3", 00:14:23.484 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 
00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev4", 00:14:23.484 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 } 00:14:23.484 ] 00:14:23.484 }' 00:14:23.484 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.742 
12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.742 "name": "raid_bdev1", 00:14:23.742 "uuid": "8dd605fe-a30c-49e5-8883-6cd43cad7578", 00:14:23.742 "strip_size_kb": 0, 00:14:23.742 "state": "online", 00:14:23.742 "raid_level": "raid1", 00:14:23.742 "superblock": false, 00:14:23.742 "num_base_bdevs": 4, 00:14:23.742 "num_base_bdevs_discovered": 3, 00:14:23.742 "num_base_bdevs_operational": 3, 00:14:23.742 "base_bdevs_list": [ 00:14:23.742 { 00:14:23.742 "name": "spare", 00:14:23.742 "uuid": "de9ae33c-3934-56c0-ab86-9eef2a5c04ed", 00:14:23.742 "is_configured": true, 00:14:23.742 "data_offset": 0, 00:14:23.742 "data_size": 65536 00:14:23.742 }, 00:14:23.742 { 00:14:23.742 "name": null, 00:14:23.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.742 "is_configured": false, 00:14:23.742 "data_offset": 0, 00:14:23.742 "data_size": 65536 00:14:23.742 }, 00:14:23.742 { 00:14:23.742 "name": "BaseBdev3", 00:14:23.742 "uuid": "81ca4e4d-3937-5475-aa33-20913bc29ed5", 00:14:23.742 "is_configured": true, 00:14:23.742 "data_offset": 0, 00:14:23.742 "data_size": 65536 00:14:23.742 }, 00:14:23.742 { 00:14:23.742 "name": "BaseBdev4", 00:14:23.742 "uuid": "661dfc1a-ce70-5938-9267-41c07d4690d6", 00:14:23.742 "is_configured": true, 00:14:23.742 "data_offset": 0, 00:14:23.742 "data_size": 65536 00:14:23.742 } 00:14:23.742 ] 00:14:23.742 }' 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.742 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.309 [2024-11-06 12:45:12.699715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.309 [2024-11-06 12:45:12.699913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.309 [2024-11-06 12:45:12.700058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.309 [2024-11-06 12:45:12.700217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.309 [2024-11-06 12:45:12.700238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.309 12:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.567 /dev/nbd0 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:24.567 
12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:24.567 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.567 1+0 records in 00:14:24.568 1+0 records out 00:14:24.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428231 s, 9.6 MB/s 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.568 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:24.826 /dev/nbd1 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:24.826 12:45:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.826 1+0 records in 00:14:24.826 1+0 records out 00:14:24.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416547 s, 9.8 MB/s 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.826 12:45:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:25.085 12:45:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.085 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.085 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.085 12:45:13 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.085 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:25.085 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.085 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.343 12:45:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.911 12:45:14 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77811 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77811 ']' 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77811 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77811 00:14:25.911 killing process with pid 77811 00:14:25.911 Received shutdown signal, test time was about 60.000000 seconds 00:14:25.911 00:14:25.911 Latency(us) 00:14:25.911 [2024-11-06T12:45:14.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.911 [2024-11-06T12:45:14.568Z] =================================================================================================================== 00:14:25.911 [2024-11-06T12:45:14.568Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77811' 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77811 00:14:25.911 [2024-11-06 
12:45:14.328701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.911 12:45:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77811 00:14:26.170 [2024-11-06 12:45:14.814789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.545 12:45:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:27.545 00:14:27.545 real 0m21.492s 00:14:27.545 user 0m24.006s 00:14:27.545 sys 0m3.786s 00:14:27.545 12:45:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:27.545 ************************************ 00:14:27.545 END TEST raid_rebuild_test 00:14:27.545 12:45:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.545 ************************************ 00:14:27.545 12:45:15 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:27.545 12:45:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:27.545 12:45:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:27.545 12:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.545 ************************************ 00:14:27.545 START TEST raid_rebuild_test_sb 00:14:27.545 ************************************ 00:14:27.545 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:27.546 12:45:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78301 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78301 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78301 ']' 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:27.546 12:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.546 [2024-11-06 12:45:16.138320] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:14:27.546 [2024-11-06 12:45:16.138792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78301 ] 00:14:27.546 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.546 Zero copy mechanism will not be used. 00:14:27.804 [2024-11-06 12:45:16.331124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.061 [2024-11-06 12:45:16.504720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.319 [2024-11-06 12:45:16.757354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.319 [2024-11-06 12:45:16.757669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.577 BaseBdev1_malloc 00:14:28.577 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.879 [2024-11-06 12:45:17.233845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:28.879 [2024-11-06 12:45:17.234183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.879 [2024-11-06 12:45:17.234302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:28.879 [2024-11-06 12:45:17.234556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.879 [2024-11-06 12:45:17.237691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.879 [2024-11-06 12:45:17.237745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.879 BaseBdev1 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 BaseBdev2_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 [2024-11-06 12:45:17.301841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:28.879 [2024-11-06 
12:45:17.301969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.879 [2024-11-06 12:45:17.302007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:28.879 [2024-11-06 12:45:17.302032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.879 [2024-11-06 12:45:17.305153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.879 BaseBdev2 00:14:28.879 [2024-11-06 12:45:17.305454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 BaseBdev3_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 [2024-11-06 12:45:17.379688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:28.879 [2024-11-06 12:45:17.380043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.879 [2024-11-06 12:45:17.380256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:14:28.879 [2024-11-06 12:45:17.380348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.879 [2024-11-06 12:45:17.383896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.879 [2024-11-06 12:45:17.383963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:28.879 BaseBdev3 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 BaseBdev4_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 [2024-11-06 12:45:17.441113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:28.879 [2024-11-06 12:45:17.441371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.879 [2024-11-06 12:45:17.441461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:28.879 [2024-11-06 12:45:17.441628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.879 [2024-11-06 12:45:17.445093] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.879 [2024-11-06 12:45:17.445159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:28.879 BaseBdev4 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 spare_malloc 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 spare_delay 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 [2024-11-06 12:45:17.509766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.879 [2024-11-06 12:45:17.510017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.879 [2024-11-06 12:45:17.510069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:14:28.879 [2024-11-06 12:45:17.510093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.879 [2024-11-06 12:45:17.513725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.879 spare 00:14:28.879 [2024-11-06 12:45:17.513948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 [2024-11-06 12:45:17.518288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.879 [2024-11-06 12:45:17.521362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.879 [2024-11-06 12:45:17.521492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.879 [2024-11-06 12:45:17.521602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:28.879 [2024-11-06 12:45:17.521925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:28.879 [2024-11-06 12:45:17.521954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.879 [2024-11-06 12:45:17.522412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:28.879 [2024-11-06 12:45:17.522749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:28.879 [2024-11-06 12:45:17.522784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:14:28.879 [2024-11-06 12:45:17.523128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.879 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.138 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.138 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:29.138 "name": "raid_bdev1", 00:14:29.138 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:29.138 "strip_size_kb": 0, 00:14:29.138 "state": "online", 00:14:29.138 "raid_level": "raid1", 00:14:29.138 "superblock": true, 00:14:29.138 "num_base_bdevs": 4, 00:14:29.138 "num_base_bdevs_discovered": 4, 00:14:29.138 "num_base_bdevs_operational": 4, 00:14:29.138 "base_bdevs_list": [ 00:14:29.138 { 00:14:29.138 "name": "BaseBdev1", 00:14:29.138 "uuid": "97f38571-599c-589c-8f21-142e06f2a252", 00:14:29.138 "is_configured": true, 00:14:29.138 "data_offset": 2048, 00:14:29.138 "data_size": 63488 00:14:29.138 }, 00:14:29.138 { 00:14:29.138 "name": "BaseBdev2", 00:14:29.138 "uuid": "e3bedb91-eb5d-54fb-858f-62c9472fd1c9", 00:14:29.138 "is_configured": true, 00:14:29.138 "data_offset": 2048, 00:14:29.138 "data_size": 63488 00:14:29.138 }, 00:14:29.138 { 00:14:29.138 "name": "BaseBdev3", 00:14:29.138 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:29.138 "is_configured": true, 00:14:29.138 "data_offset": 2048, 00:14:29.138 "data_size": 63488 00:14:29.138 }, 00:14:29.138 { 00:14:29.138 "name": "BaseBdev4", 00:14:29.138 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:29.138 "is_configured": true, 00:14:29.138 "data_offset": 2048, 00:14:29.138 "data_size": 63488 00:14:29.138 } 00:14:29.138 ] 00:14:29.138 }' 00:14:29.138 12:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.138 12:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.396 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.396 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:29.396 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.396 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.396 
[2024-11-06 12:45:18.047630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.654 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:29.912 [2024-11-06 12:45:18.447399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:29.912 /dev/nbd0 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.912 1+0 records in 00:14:29.912 1+0 records out 00:14:29.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393747 s, 10.4 MB/s 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:29.912 12:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:39.890 63488+0 records in 00:14:39.890 63488+0 records out 00:14:39.890 32505856 bytes (33 MB, 31 MiB) copied, 8.35085 s, 3.9 MB/s 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.890 12:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.890 [2024-11-06 12:45:27.173054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.890 [2024-11-06 12:45:27.213154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.890 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.890 "name": "raid_bdev1", 00:14:39.890 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:39.890 "strip_size_kb": 0, 00:14:39.890 "state": "online", 00:14:39.890 "raid_level": "raid1", 00:14:39.890 "superblock": true, 00:14:39.890 "num_base_bdevs": 4, 00:14:39.890 "num_base_bdevs_discovered": 3, 00:14:39.890 "num_base_bdevs_operational": 3, 00:14:39.890 "base_bdevs_list": [ 00:14:39.890 { 00:14:39.890 "name": null, 00:14:39.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.890 "is_configured": false, 00:14:39.890 "data_offset": 0, 00:14:39.890 "data_size": 63488 00:14:39.890 }, 00:14:39.891 { 00:14:39.891 "name": "BaseBdev2", 00:14:39.891 "uuid": 
"e3bedb91-eb5d-54fb-858f-62c9472fd1c9", 00:14:39.891 "is_configured": true, 00:14:39.891 "data_offset": 2048, 00:14:39.891 "data_size": 63488 00:14:39.891 }, 00:14:39.891 { 00:14:39.891 "name": "BaseBdev3", 00:14:39.891 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:39.891 "is_configured": true, 00:14:39.891 "data_offset": 2048, 00:14:39.891 "data_size": 63488 00:14:39.891 }, 00:14:39.891 { 00:14:39.891 "name": "BaseBdev4", 00:14:39.891 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:39.891 "is_configured": true, 00:14:39.891 "data_offset": 2048, 00:14:39.891 "data_size": 63488 00:14:39.891 } 00:14:39.891 ] 00:14:39.891 }' 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.891 [2024-11-06 12:45:27.725303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.891 [2024-11-06 12:45:27.739753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.891 12:45:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:39.891 [2024-11-06 12:45:27.742246] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.149 "name": "raid_bdev1", 00:14:40.149 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:40.149 "strip_size_kb": 0, 00:14:40.149 "state": "online", 00:14:40.149 "raid_level": "raid1", 00:14:40.149 "superblock": true, 00:14:40.149 "num_base_bdevs": 4, 00:14:40.149 "num_base_bdevs_discovered": 4, 00:14:40.149 "num_base_bdevs_operational": 4, 00:14:40.149 "process": { 00:14:40.149 "type": "rebuild", 00:14:40.149 "target": "spare", 00:14:40.149 "progress": { 00:14:40.149 "blocks": 20480, 00:14:40.149 "percent": 32 00:14:40.149 } 00:14:40.149 }, 00:14:40.149 "base_bdevs_list": [ 00:14:40.149 { 00:14:40.149 "name": "spare", 00:14:40.149 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 2048, 00:14:40.149 "data_size": 63488 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": "BaseBdev2", 00:14:40.149 "uuid": "e3bedb91-eb5d-54fb-858f-62c9472fd1c9", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 2048, 
00:14:40.149 "data_size": 63488 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": "BaseBdev3", 00:14:40.149 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 2048, 00:14:40.149 "data_size": 63488 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": "BaseBdev4", 00:14:40.149 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 2048, 00:14:40.149 "data_size": 63488 00:14:40.149 } 00:14:40.149 ] 00:14:40.149 }' 00:14:40.149 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.408 [2024-11-06 12:45:28.903358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.408 [2024-11-06 12:45:28.951384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.408 [2024-11-06 12:45:28.951501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.408 [2024-11-06 12:45:28.951530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.408 [2024-11-06 12:45:28.951546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.408 12:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.408 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.408 "name": "raid_bdev1", 00:14:40.408 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:40.408 "strip_size_kb": 0, 00:14:40.408 "state": "online", 00:14:40.408 "raid_level": "raid1", 
00:14:40.408 "superblock": true, 00:14:40.408 "num_base_bdevs": 4, 00:14:40.408 "num_base_bdevs_discovered": 3, 00:14:40.408 "num_base_bdevs_operational": 3, 00:14:40.408 "base_bdevs_list": [ 00:14:40.408 { 00:14:40.408 "name": null, 00:14:40.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.408 "is_configured": false, 00:14:40.408 "data_offset": 0, 00:14:40.408 "data_size": 63488 00:14:40.408 }, 00:14:40.408 { 00:14:40.408 "name": "BaseBdev2", 00:14:40.408 "uuid": "e3bedb91-eb5d-54fb-858f-62c9472fd1c9", 00:14:40.408 "is_configured": true, 00:14:40.408 "data_offset": 2048, 00:14:40.408 "data_size": 63488 00:14:40.408 }, 00:14:40.408 { 00:14:40.408 "name": "BaseBdev3", 00:14:40.408 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:40.408 "is_configured": true, 00:14:40.408 "data_offset": 2048, 00:14:40.408 "data_size": 63488 00:14:40.408 }, 00:14:40.408 { 00:14:40.408 "name": "BaseBdev4", 00:14:40.408 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:40.408 "is_configured": true, 00:14:40.408 "data_offset": 2048, 00:14:40.408 "data_size": 63488 00:14:40.408 } 00:14:40.408 ] 00:14:40.408 }' 00:14:40.408 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.408 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.975 "name": "raid_bdev1", 00:14:40.975 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:40.975 "strip_size_kb": 0, 00:14:40.975 "state": "online", 00:14:40.975 "raid_level": "raid1", 00:14:40.975 "superblock": true, 00:14:40.975 "num_base_bdevs": 4, 00:14:40.975 "num_base_bdevs_discovered": 3, 00:14:40.975 "num_base_bdevs_operational": 3, 00:14:40.975 "base_bdevs_list": [ 00:14:40.975 { 00:14:40.975 "name": null, 00:14:40.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.975 "is_configured": false, 00:14:40.975 "data_offset": 0, 00:14:40.975 "data_size": 63488 00:14:40.975 }, 00:14:40.975 { 00:14:40.975 "name": "BaseBdev2", 00:14:40.975 "uuid": "e3bedb91-eb5d-54fb-858f-62c9472fd1c9", 00:14:40.975 "is_configured": true, 00:14:40.975 "data_offset": 2048, 00:14:40.975 "data_size": 63488 00:14:40.975 }, 00:14:40.975 { 00:14:40.975 "name": "BaseBdev3", 00:14:40.975 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:40.975 "is_configured": true, 00:14:40.975 "data_offset": 2048, 00:14:40.975 "data_size": 63488 00:14:40.975 }, 00:14:40.975 { 00:14:40.975 "name": "BaseBdev4", 00:14:40.975 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:40.975 "is_configured": true, 00:14:40.975 "data_offset": 2048, 00:14:40.975 "data_size": 63488 00:14:40.975 } 00:14:40.975 ] 00:14:40.975 }' 00:14:40.975 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.233 12:45:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.233 [2024-11-06 12:45:29.687696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.233 [2024-11-06 12:45:29.702496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.233 12:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:41.233 [2024-11-06 12:45:29.705217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
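The repeated `[[ none == \n\o\n\e ]]` checks in the trace above come from the test helpers comparing jq output against a backslash-escaped literal. This is deliberate: in bash's `[[ ]]`, an unquoted right-hand side is a glob pattern, so escaping every character forces a literal match. A minimal sketch of the distinction (standalone, not taken from the script):

```shell
# In [[ ]], an escaped RHS (\n\o\n\e) matches literally; an unescaped
# RHS is treated as a glob pattern and can match more than intended.
val="none"
[[ $val == \n\o\n\e ]] && literal=yes || literal=no   # literal comparison
[[ "no" == n* ]] && globbed=yes || globbed=no         # glob pattern match
echo "literal=$literal globbed=$globbed"
```

Quoting the right-hand side (`[[ $val == "none" ]]`) has the same literal-matching effect; the escaped form is simply what xtrace prints.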
xtrace_disable 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.168 "name": "raid_bdev1", 00:14:42.168 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:42.168 "strip_size_kb": 0, 00:14:42.168 "state": "online", 00:14:42.168 "raid_level": "raid1", 00:14:42.168 "superblock": true, 00:14:42.168 "num_base_bdevs": 4, 00:14:42.168 "num_base_bdevs_discovered": 4, 00:14:42.168 "num_base_bdevs_operational": 4, 00:14:42.168 "process": { 00:14:42.168 "type": "rebuild", 00:14:42.168 "target": "spare", 00:14:42.168 "progress": { 00:14:42.168 "blocks": 20480, 00:14:42.168 "percent": 32 00:14:42.168 } 00:14:42.168 }, 00:14:42.168 "base_bdevs_list": [ 00:14:42.168 { 00:14:42.168 "name": "spare", 00:14:42.168 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:42.168 "is_configured": true, 00:14:42.168 "data_offset": 2048, 00:14:42.168 "data_size": 63488 00:14:42.168 }, 00:14:42.168 { 00:14:42.168 "name": "BaseBdev2", 00:14:42.168 "uuid": "e3bedb91-eb5d-54fb-858f-62c9472fd1c9", 00:14:42.168 "is_configured": true, 00:14:42.168 "data_offset": 2048, 00:14:42.168 "data_size": 63488 00:14:42.168 }, 00:14:42.168 { 00:14:42.168 "name": "BaseBdev3", 00:14:42.168 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:42.168 "is_configured": true, 00:14:42.168 "data_offset": 2048, 00:14:42.168 "data_size": 63488 00:14:42.168 }, 00:14:42.168 { 00:14:42.168 "name": "BaseBdev4", 00:14:42.168 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:42.168 "is_configured": true, 00:14:42.168 "data_offset": 2048, 00:14:42.168 "data_size": 63488 00:14:42.168 } 00:14:42.168 ] 00:14:42.168 }' 00:14:42.168 12:45:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.168 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:42.426 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.426 12:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.426 [2024-11-06 12:45:30.870984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.426 [2024-11-06 12:45:31.016709] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:42.426 12:45:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.426 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.426 "name": "raid_bdev1", 00:14:42.426 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:42.426 "strip_size_kb": 0, 00:14:42.426 "state": "online", 00:14:42.426 "raid_level": "raid1", 00:14:42.426 "superblock": true, 00:14:42.426 "num_base_bdevs": 4, 00:14:42.426 "num_base_bdevs_discovered": 3, 00:14:42.426 "num_base_bdevs_operational": 3, 00:14:42.426 "process": { 00:14:42.426 "type": "rebuild", 00:14:42.426 "target": "spare", 00:14:42.426 "progress": { 00:14:42.426 "blocks": 24576, 00:14:42.426 "percent": 38 00:14:42.426 } 00:14:42.426 }, 00:14:42.426 "base_bdevs_list": [ 00:14:42.426 { 00:14:42.426 "name": "spare", 00:14:42.426 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:42.426 "is_configured": true, 00:14:42.426 "data_offset": 2048, 00:14:42.426 "data_size": 63488 
00:14:42.426 }, 00:14:42.426 { 00:14:42.426 "name": null, 00:14:42.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.426 "is_configured": false, 00:14:42.426 "data_offset": 0, 00:14:42.426 "data_size": 63488 00:14:42.426 }, 00:14:42.426 { 00:14:42.426 "name": "BaseBdev3", 00:14:42.426 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:42.426 "is_configured": true, 00:14:42.426 "data_offset": 2048, 00:14:42.426 "data_size": 63488 00:14:42.426 }, 00:14:42.426 { 00:14:42.426 "name": "BaseBdev4", 00:14:42.426 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:42.426 "is_configured": true, 00:14:42.426 "data_offset": 2048, 00:14:42.427 "data_size": 63488 00:14:42.427 } 00:14:42.427 ] 00:14:42.427 }' 00:14:42.427 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.685 12:45:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.685 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.685 "name": "raid_bdev1", 00:14:42.685 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:42.685 "strip_size_kb": 0, 00:14:42.685 "state": "online", 00:14:42.685 "raid_level": "raid1", 00:14:42.685 "superblock": true, 00:14:42.685 "num_base_bdevs": 4, 00:14:42.685 "num_base_bdevs_discovered": 3, 00:14:42.685 "num_base_bdevs_operational": 3, 00:14:42.685 "process": { 00:14:42.685 "type": "rebuild", 00:14:42.685 "target": "spare", 00:14:42.685 "progress": { 00:14:42.685 "blocks": 26624, 00:14:42.685 "percent": 41 00:14:42.685 } 00:14:42.685 }, 00:14:42.685 "base_bdevs_list": [ 00:14:42.685 { 00:14:42.685 "name": "spare", 00:14:42.685 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:42.685 "is_configured": true, 00:14:42.685 "data_offset": 2048, 00:14:42.685 "data_size": 63488 00:14:42.685 }, 00:14:42.685 { 00:14:42.685 "name": null, 00:14:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.685 "is_configured": false, 00:14:42.685 "data_offset": 0, 00:14:42.685 "data_size": 63488 00:14:42.685 }, 00:14:42.685 { 00:14:42.685 "name": "BaseBdev3", 00:14:42.685 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:42.686 "is_configured": true, 00:14:42.686 "data_offset": 2048, 00:14:42.686 "data_size": 63488 00:14:42.686 }, 00:14:42.686 { 00:14:42.686 "name": "BaseBdev4", 00:14:42.686 "uuid": 
"cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:42.686 "is_configured": true, 00:14:42.686 "data_offset": 2048, 00:14:42.686 "data_size": 63488 00:14:42.686 } 00:14:42.686 ] 00:14:42.686 }' 00:14:42.686 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.686 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.686 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.686 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.686 12:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.060 12:45:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.060 12:45:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.060 "name": "raid_bdev1", 00:14:44.060 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:44.060 "strip_size_kb": 0, 00:14:44.060 "state": "online", 00:14:44.060 "raid_level": "raid1", 00:14:44.060 "superblock": true, 00:14:44.060 "num_base_bdevs": 4, 00:14:44.060 "num_base_bdevs_discovered": 3, 00:14:44.060 "num_base_bdevs_operational": 3, 00:14:44.060 "process": { 00:14:44.060 "type": "rebuild", 00:14:44.060 "target": "spare", 00:14:44.060 "progress": { 00:14:44.060 "blocks": 51200, 00:14:44.060 "percent": 80 00:14:44.060 } 00:14:44.060 }, 00:14:44.060 "base_bdevs_list": [ 00:14:44.060 { 00:14:44.060 "name": "spare", 00:14:44.060 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:44.060 "is_configured": true, 00:14:44.060 "data_offset": 2048, 00:14:44.060 "data_size": 63488 00:14:44.060 }, 00:14:44.060 { 00:14:44.060 "name": null, 00:14:44.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.061 "is_configured": false, 00:14:44.061 "data_offset": 0, 00:14:44.061 "data_size": 63488 00:14:44.061 }, 00:14:44.061 { 00:14:44.061 "name": "BaseBdev3", 00:14:44.061 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:44.061 "is_configured": true, 00:14:44.061 "data_offset": 2048, 00:14:44.061 "data_size": 63488 00:14:44.061 }, 00:14:44.061 { 00:14:44.061 "name": "BaseBdev4", 00:14:44.061 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:44.061 "is_configured": true, 00:14:44.061 "data_offset": 2048, 00:14:44.061 "data_size": 63488 00:14:44.061 } 00:14:44.061 ] 00:14:44.061 }' 00:14:44.061 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.061 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.061 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.061 12:45:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.061 12:45:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.319 [2024-11-06 12:45:32.935699] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:44.319 [2024-11-06 12:45:32.935832] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:44.319 [2024-11-06 12:45:32.936061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.887 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.887 "name": "raid_bdev1", 00:14:44.887 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:44.887 
"strip_size_kb": 0, 00:14:44.887 "state": "online", 00:14:44.887 "raid_level": "raid1", 00:14:44.887 "superblock": true, 00:14:44.887 "num_base_bdevs": 4, 00:14:44.887 "num_base_bdevs_discovered": 3, 00:14:44.887 "num_base_bdevs_operational": 3, 00:14:44.887 "base_bdevs_list": [ 00:14:44.887 { 00:14:44.887 "name": "spare", 00:14:44.887 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:44.887 "is_configured": true, 00:14:44.887 "data_offset": 2048, 00:14:44.887 "data_size": 63488 00:14:44.887 }, 00:14:44.887 { 00:14:44.887 "name": null, 00:14:44.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.887 "is_configured": false, 00:14:44.887 "data_offset": 0, 00:14:44.887 "data_size": 63488 00:14:44.887 }, 00:14:44.887 { 00:14:44.887 "name": "BaseBdev3", 00:14:44.887 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:44.887 "is_configured": true, 00:14:44.887 "data_offset": 2048, 00:14:44.887 "data_size": 63488 00:14:44.887 }, 00:14:44.887 { 00:14:44.887 "name": "BaseBdev4", 00:14:44.887 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:44.887 "is_configured": true, 00:14:44.887 "data_offset": 2048, 00:14:44.887 "data_size": 63488 00:14:44.887 } 00:14:44.887 ] 00:14:44.887 }' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.145 "name": "raid_bdev1", 00:14:45.145 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:45.145 "strip_size_kb": 0, 00:14:45.145 "state": "online", 00:14:45.145 "raid_level": "raid1", 00:14:45.145 "superblock": true, 00:14:45.145 "num_base_bdevs": 4, 00:14:45.145 "num_base_bdevs_discovered": 3, 00:14:45.145 "num_base_bdevs_operational": 3, 00:14:45.145 "base_bdevs_list": [ 00:14:45.145 { 00:14:45.145 "name": "spare", 00:14:45.145 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:45.145 "is_configured": true, 00:14:45.145 "data_offset": 2048, 00:14:45.145 "data_size": 63488 00:14:45.145 }, 00:14:45.145 { 00:14:45.145 "name": null, 00:14:45.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.145 "is_configured": false, 00:14:45.145 "data_offset": 0, 00:14:45.145 "data_size": 63488 00:14:45.145 }, 00:14:45.145 { 00:14:45.145 "name": "BaseBdev3", 00:14:45.145 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:45.145 "is_configured": true, 00:14:45.145 "data_offset": 2048, 00:14:45.145 "data_size": 
63488 00:14:45.145 }, 00:14:45.145 { 00:14:45.145 "name": "BaseBdev4", 00:14:45.145 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:45.145 "is_configured": true, 00:14:45.145 "data_offset": 2048, 00:14:45.145 "data_size": 63488 00:14:45.145 } 00:14:45.145 ] 00:14:45.145 }' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.145 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.404 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.404 "name": "raid_bdev1", 00:14:45.404 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:45.404 "strip_size_kb": 0, 00:14:45.404 "state": "online", 00:14:45.404 "raid_level": "raid1", 00:14:45.404 "superblock": true, 00:14:45.404 "num_base_bdevs": 4, 00:14:45.404 "num_base_bdevs_discovered": 3, 00:14:45.404 "num_base_bdevs_operational": 3, 00:14:45.404 "base_bdevs_list": [ 00:14:45.404 { 00:14:45.404 "name": "spare", 00:14:45.404 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:45.404 "is_configured": true, 00:14:45.404 "data_offset": 2048, 00:14:45.404 "data_size": 63488 00:14:45.404 }, 00:14:45.404 { 00:14:45.404 "name": null, 00:14:45.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.404 "is_configured": false, 00:14:45.404 "data_offset": 0, 00:14:45.404 "data_size": 63488 00:14:45.404 }, 00:14:45.404 { 00:14:45.404 "name": "BaseBdev3", 00:14:45.404 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:45.404 "is_configured": true, 00:14:45.404 "data_offset": 2048, 00:14:45.404 "data_size": 63488 00:14:45.404 }, 00:14:45.404 { 00:14:45.404 "name": "BaseBdev4", 00:14:45.404 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:45.404 "is_configured": true, 00:14:45.405 "data_offset": 2048, 00:14:45.405 "data_size": 63488 00:14:45.405 } 00:14:45.405 ] 00:14:45.405 }' 00:14:45.405 12:45:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.405 12:45:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.662 [2024-11-06 12:45:34.283010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.662 [2024-11-06 12:45:34.283078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.662 [2024-11-06 12:45:34.283188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.662 [2024-11-06 12:45:34.283357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.662 [2024-11-06 12:45:34.283377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.662 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.663 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:45.663 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.921 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:46.180 /dev/nbd0 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:46.180 12:45:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.180 1+0 records in 00:14:46.180 1+0 records out 00:14:46.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265181 s, 15.4 MB/s 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:46.180 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:46.439 /dev/nbd1 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- 
# (( i = 1 )) 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.439 1+0 records in 00:14:46.439 1+0 records out 00:14:46.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397829 s, 10.3 MB/s 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:46.439 12:45:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.439 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:46.439 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.697 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.956 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.215 [2024-11-06 12:45:35.845236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.215 [2024-11-06 12:45:35.845304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.215 [2024-11-06 12:45:35.845338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:47.215 [2024-11-06 12:45:35.845354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.215 [2024-11-06 12:45:35.848318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.215 [2024-11-06 
12:45:35.848364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.215 [2024-11-06 12:45:35.848479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.215 [2024-11-06 12:45:35.848554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.215 [2024-11-06 12:45:35.848734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.215 [2024-11-06 12:45:35.848880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.215 spare 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.215 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.474 [2024-11-06 12:45:35.949004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:47.474 [2024-11-06 12:45:35.949170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.474 [2024-11-06 12:45:35.949626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:47.474 [2024-11-06 12:45:35.949989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:47.474 [2024-11-06 12:45:35.950130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:47.474 [2024-11-06 12:45:35.950470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.474 12:45:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.474 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.474 "name": "raid_bdev1", 00:14:47.474 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:47.474 "strip_size_kb": 0, 00:14:47.474 "state": "online", 00:14:47.474 "raid_level": "raid1", 00:14:47.474 "superblock": true, 00:14:47.474 "num_base_bdevs": 4, 00:14:47.474 "num_base_bdevs_discovered": 3, 00:14:47.474 
"num_base_bdevs_operational": 3, 00:14:47.474 "base_bdevs_list": [ 00:14:47.474 { 00:14:47.474 "name": "spare", 00:14:47.474 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:47.474 "is_configured": true, 00:14:47.474 "data_offset": 2048, 00:14:47.474 "data_size": 63488 00:14:47.474 }, 00:14:47.474 { 00:14:47.474 "name": null, 00:14:47.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.474 "is_configured": false, 00:14:47.474 "data_offset": 2048, 00:14:47.474 "data_size": 63488 00:14:47.474 }, 00:14:47.474 { 00:14:47.474 "name": "BaseBdev3", 00:14:47.474 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:47.474 "is_configured": true, 00:14:47.474 "data_offset": 2048, 00:14:47.474 "data_size": 63488 00:14:47.474 }, 00:14:47.474 { 00:14:47.474 "name": "BaseBdev4", 00:14:47.474 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:47.474 "is_configured": true, 00:14:47.474 "data_offset": 2048, 00:14:47.474 "data_size": 63488 00:14:47.474 } 00:14:47.474 ] 00:14:47.474 }' 00:14:47.474 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.474 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.041 12:45:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.041 "name": "raid_bdev1", 00:14:48.041 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:48.041 "strip_size_kb": 0, 00:14:48.041 "state": "online", 00:14:48.041 "raid_level": "raid1", 00:14:48.041 "superblock": true, 00:14:48.041 "num_base_bdevs": 4, 00:14:48.041 "num_base_bdevs_discovered": 3, 00:14:48.041 "num_base_bdevs_operational": 3, 00:14:48.041 "base_bdevs_list": [ 00:14:48.041 { 00:14:48.041 "name": "spare", 00:14:48.041 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:48.041 "is_configured": true, 00:14:48.041 "data_offset": 2048, 00:14:48.041 "data_size": 63488 00:14:48.041 }, 00:14:48.041 { 00:14:48.041 "name": null, 00:14:48.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.041 "is_configured": false, 00:14:48.041 "data_offset": 2048, 00:14:48.041 "data_size": 63488 00:14:48.041 }, 00:14:48.041 { 00:14:48.041 "name": "BaseBdev3", 00:14:48.041 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:48.041 "is_configured": true, 00:14:48.041 "data_offset": 2048, 00:14:48.041 "data_size": 63488 00:14:48.041 }, 00:14:48.041 { 00:14:48.041 "name": "BaseBdev4", 00:14:48.041 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:48.041 "is_configured": true, 00:14:48.041 "data_offset": 2048, 00:14:48.041 "data_size": 63488 00:14:48.041 } 00:14:48.041 ] 00:14:48.041 }' 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.041 12:45:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 [2024-11-06 12:45:36.666664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.041 12:45:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.300 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.300 "name": "raid_bdev1", 00:14:48.300 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:48.300 "strip_size_kb": 0, 00:14:48.300 "state": "online", 00:14:48.300 "raid_level": "raid1", 00:14:48.300 "superblock": true, 00:14:48.300 "num_base_bdevs": 4, 00:14:48.300 "num_base_bdevs_discovered": 2, 00:14:48.300 "num_base_bdevs_operational": 2, 00:14:48.300 "base_bdevs_list": [ 00:14:48.300 { 00:14:48.300 "name": null, 00:14:48.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.300 "is_configured": false, 00:14:48.300 "data_offset": 0, 00:14:48.300 "data_size": 63488 00:14:48.300 }, 00:14:48.300 { 00:14:48.300 "name": null, 00:14:48.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.300 "is_configured": false, 00:14:48.300 "data_offset": 2048, 00:14:48.300 "data_size": 63488 00:14:48.300 }, 
00:14:48.300 { 00:14:48.300 "name": "BaseBdev3", 00:14:48.300 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:48.300 "is_configured": true, 00:14:48.300 "data_offset": 2048, 00:14:48.300 "data_size": 63488 00:14:48.300 }, 00:14:48.300 { 00:14:48.300 "name": "BaseBdev4", 00:14:48.300 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:48.300 "is_configured": true, 00:14:48.300 "data_offset": 2048, 00:14:48.300 "data_size": 63488 00:14:48.300 } 00:14:48.300 ] 00:14:48.300 }' 00:14:48.300 12:45:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.300 12:45:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.558 12:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.558 12:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.558 12:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.558 [2024-11-06 12:45:37.166831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.558 [2024-11-06 12:45:37.167419] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:48.558 [2024-11-06 12:45:37.167450] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:48.558 [2024-11-06 12:45:37.167520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.558 [2024-11-06 12:45:37.181300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:48.558 12:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.558 12:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:48.558 [2024-11-06 12:45:37.183963] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.935 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.935 "name": "raid_bdev1", 00:14:49.935 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:49.935 "strip_size_kb": 0, 00:14:49.935 "state": "online", 00:14:49.935 "raid_level": "raid1", 
00:14:49.935 "superblock": true, 00:14:49.935 "num_base_bdevs": 4, 00:14:49.936 "num_base_bdevs_discovered": 3, 00:14:49.936 "num_base_bdevs_operational": 3, 00:14:49.936 "process": { 00:14:49.936 "type": "rebuild", 00:14:49.936 "target": "spare", 00:14:49.936 "progress": { 00:14:49.936 "blocks": 20480, 00:14:49.936 "percent": 32 00:14:49.936 } 00:14:49.936 }, 00:14:49.936 "base_bdevs_list": [ 00:14:49.936 { 00:14:49.936 "name": "spare", 00:14:49.936 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:49.936 "is_configured": true, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 }, 00:14:49.936 { 00:14:49.936 "name": null, 00:14:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.936 "is_configured": false, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 }, 00:14:49.936 { 00:14:49.936 "name": "BaseBdev3", 00:14:49.936 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:49.936 "is_configured": true, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 }, 00:14:49.936 { 00:14:49.936 "name": "BaseBdev4", 00:14:49.936 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:49.936 "is_configured": true, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 } 00:14:49.936 ] 00:14:49.936 }' 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.936 [2024-11-06 12:45:38.353464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.936 [2024-11-06 12:45:38.393368] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:49.936 [2024-11-06 12:45:38.393620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.936 [2024-11-06 12:45:38.393775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.936 [2024-11-06 12:45:38.393800] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.936 "name": "raid_bdev1", 00:14:49.936 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:49.936 "strip_size_kb": 0, 00:14:49.936 "state": "online", 00:14:49.936 "raid_level": "raid1", 00:14:49.936 "superblock": true, 00:14:49.936 "num_base_bdevs": 4, 00:14:49.936 "num_base_bdevs_discovered": 2, 00:14:49.936 "num_base_bdevs_operational": 2, 00:14:49.936 "base_bdevs_list": [ 00:14:49.936 { 00:14:49.936 "name": null, 00:14:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.936 "is_configured": false, 00:14:49.936 "data_offset": 0, 00:14:49.936 "data_size": 63488 00:14:49.936 }, 00:14:49.936 { 00:14:49.936 "name": null, 00:14:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.936 "is_configured": false, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 }, 00:14:49.936 { 00:14:49.936 "name": "BaseBdev3", 00:14:49.936 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:49.936 "is_configured": true, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 }, 00:14:49.936 { 00:14:49.936 "name": "BaseBdev4", 00:14:49.936 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:49.936 "is_configured": true, 00:14:49.936 "data_offset": 2048, 00:14:49.936 "data_size": 63488 00:14:49.936 } 00:14:49.936 ] 00:14:49.936 }' 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:49.936 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.501 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:50.501 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.501 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.501 [2024-11-06 12:45:38.931034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:50.501 [2024-11-06 12:45:38.931128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.501 [2024-11-06 12:45:38.931169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:50.501 [2024-11-06 12:45:38.931185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.501 [2024-11-06 12:45:38.931844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.501 [2024-11-06 12:45:38.931878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:50.501 [2024-11-06 12:45:38.932044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:50.501 [2024-11-06 12:45:38.932066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.501 [2024-11-06 12:45:38.932085] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:50.501 [2024-11-06 12:45:38.932117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.501 [2024-11-06 12:45:38.945820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:50.501 spare 00:14:50.501 12:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.501 12:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:50.501 [2024-11-06 12:45:38.948611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.433 12:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.433 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.433 "name": "raid_bdev1", 00:14:51.433 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:51.433 "strip_size_kb": 0, 00:14:51.433 "state": "online", 00:14:51.433 
"raid_level": "raid1", 00:14:51.433 "superblock": true, 00:14:51.433 "num_base_bdevs": 4, 00:14:51.433 "num_base_bdevs_discovered": 3, 00:14:51.433 "num_base_bdevs_operational": 3, 00:14:51.433 "process": { 00:14:51.433 "type": "rebuild", 00:14:51.433 "target": "spare", 00:14:51.433 "progress": { 00:14:51.433 "blocks": 20480, 00:14:51.433 "percent": 32 00:14:51.433 } 00:14:51.433 }, 00:14:51.433 "base_bdevs_list": [ 00:14:51.433 { 00:14:51.433 "name": "spare", 00:14:51.433 "uuid": "a7295160-d97a-5a58-9da7-4d918d0edac9", 00:14:51.433 "is_configured": true, 00:14:51.433 "data_offset": 2048, 00:14:51.433 "data_size": 63488 00:14:51.433 }, 00:14:51.433 { 00:14:51.433 "name": null, 00:14:51.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.433 "is_configured": false, 00:14:51.433 "data_offset": 2048, 00:14:51.433 "data_size": 63488 00:14:51.433 }, 00:14:51.433 { 00:14:51.433 "name": "BaseBdev3", 00:14:51.433 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:51.433 "is_configured": true, 00:14:51.433 "data_offset": 2048, 00:14:51.433 "data_size": 63488 00:14:51.433 }, 00:14:51.433 { 00:14:51.433 "name": "BaseBdev4", 00:14:51.433 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:51.433 "is_configured": true, 00:14:51.433 "data_offset": 2048, 00:14:51.433 "data_size": 63488 00:14:51.433 } 00:14:51.433 ] 00:14:51.433 }' 00:14:51.433 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.433 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.433 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.692 [2024-11-06 12:45:40.114533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.692 [2024-11-06 12:45:40.158065] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.692 [2024-11-06 12:45:40.158176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.692 [2024-11-06 12:45:40.158225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.692 [2024-11-06 12:45:40.158242] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.692 
12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.692 "name": "raid_bdev1", 00:14:51.692 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:51.692 "strip_size_kb": 0, 00:14:51.692 "state": "online", 00:14:51.692 "raid_level": "raid1", 00:14:51.692 "superblock": true, 00:14:51.692 "num_base_bdevs": 4, 00:14:51.692 "num_base_bdevs_discovered": 2, 00:14:51.692 "num_base_bdevs_operational": 2, 00:14:51.692 "base_bdevs_list": [ 00:14:51.692 { 00:14:51.692 "name": null, 00:14:51.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.692 "is_configured": false, 00:14:51.692 "data_offset": 0, 00:14:51.692 "data_size": 63488 00:14:51.692 }, 00:14:51.692 { 00:14:51.692 "name": null, 00:14:51.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.692 "is_configured": false, 00:14:51.692 "data_offset": 2048, 00:14:51.692 "data_size": 63488 00:14:51.692 }, 00:14:51.692 { 00:14:51.692 "name": "BaseBdev3", 00:14:51.692 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:51.692 "is_configured": true, 00:14:51.692 "data_offset": 2048, 00:14:51.692 "data_size": 63488 00:14:51.692 }, 00:14:51.692 { 00:14:51.692 "name": "BaseBdev4", 00:14:51.692 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:51.692 "is_configured": true, 00:14:51.692 "data_offset": 2048, 00:14:51.692 "data_size": 63488 00:14:51.692 } 00:14:51.692 ] 00:14:51.692 }' 00:14:51.692 12:45:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.692 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.303 "name": "raid_bdev1", 00:14:52.303 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:52.303 "strip_size_kb": 0, 00:14:52.303 "state": "online", 00:14:52.303 "raid_level": "raid1", 00:14:52.303 "superblock": true, 00:14:52.303 "num_base_bdevs": 4, 00:14:52.303 "num_base_bdevs_discovered": 2, 00:14:52.303 "num_base_bdevs_operational": 2, 00:14:52.303 "base_bdevs_list": [ 00:14:52.303 { 00:14:52.303 "name": null, 00:14:52.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.303 "is_configured": false, 00:14:52.303 "data_offset": 0, 00:14:52.303 "data_size": 63488 00:14:52.303 }, 00:14:52.303 
{ 00:14:52.303 "name": null, 00:14:52.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.303 "is_configured": false, 00:14:52.303 "data_offset": 2048, 00:14:52.303 "data_size": 63488 00:14:52.303 }, 00:14:52.303 { 00:14:52.303 "name": "BaseBdev3", 00:14:52.303 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:52.303 "is_configured": true, 00:14:52.303 "data_offset": 2048, 00:14:52.303 "data_size": 63488 00:14:52.303 }, 00:14:52.303 { 00:14:52.303 "name": "BaseBdev4", 00:14:52.303 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:52.303 "is_configured": true, 00:14:52.303 "data_offset": 2048, 00:14:52.303 "data_size": 63488 00:14:52.303 } 00:14:52.303 ] 00:14:52.303 }' 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.303 [2024-11-06 12:45:40.883272] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.303 [2024-11-06 12:45:40.883593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.303 [2024-11-06 12:45:40.883632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:52.303 [2024-11-06 12:45:40.883659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.303 [2024-11-06 12:45:40.884314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.303 [2024-11-06 12:45:40.884344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.303 [2024-11-06 12:45:40.884438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:52.303 [2024-11-06 12:45:40.884463] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:52.303 [2024-11-06 12:45:40.884474] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.303 [2024-11-06 12:45:40.884504] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:52.303 BaseBdev1 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.303 12:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.679 12:45:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.679 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.679 "name": "raid_bdev1", 00:14:53.679 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:53.679 "strip_size_kb": 0, 00:14:53.679 "state": "online", 00:14:53.679 "raid_level": "raid1", 00:14:53.679 "superblock": true, 00:14:53.679 "num_base_bdevs": 4, 00:14:53.679 "num_base_bdevs_discovered": 2, 00:14:53.679 "num_base_bdevs_operational": 2, 00:14:53.679 "base_bdevs_list": [ 00:14:53.679 { 00:14:53.679 "name": null, 00:14:53.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.679 "is_configured": false, 00:14:53.679 "data_offset": 0, 00:14:53.679 "data_size": 63488 00:14:53.679 }, 00:14:53.679 { 00:14:53.679 "name": null, 00:14:53.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.679 
"is_configured": false, 00:14:53.679 "data_offset": 2048, 00:14:53.679 "data_size": 63488 00:14:53.679 }, 00:14:53.679 { 00:14:53.679 "name": "BaseBdev3", 00:14:53.679 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:53.679 "is_configured": true, 00:14:53.680 "data_offset": 2048, 00:14:53.680 "data_size": 63488 00:14:53.680 }, 00:14:53.680 { 00:14:53.680 "name": "BaseBdev4", 00:14:53.680 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:53.680 "is_configured": true, 00:14:53.680 "data_offset": 2048, 00:14:53.680 "data_size": 63488 00:14:53.680 } 00:14:53.680 ] 00:14:53.680 }' 00:14:53.680 12:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.680 12:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:53.938 "name": "raid_bdev1", 00:14:53.938 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:53.938 "strip_size_kb": 0, 00:14:53.938 "state": "online", 00:14:53.938 "raid_level": "raid1", 00:14:53.938 "superblock": true, 00:14:53.938 "num_base_bdevs": 4, 00:14:53.938 "num_base_bdevs_discovered": 2, 00:14:53.938 "num_base_bdevs_operational": 2, 00:14:53.938 "base_bdevs_list": [ 00:14:53.938 { 00:14:53.938 "name": null, 00:14:53.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.938 "is_configured": false, 00:14:53.938 "data_offset": 0, 00:14:53.938 "data_size": 63488 00:14:53.938 }, 00:14:53.938 { 00:14:53.938 "name": null, 00:14:53.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.938 "is_configured": false, 00:14:53.938 "data_offset": 2048, 00:14:53.938 "data_size": 63488 00:14:53.938 }, 00:14:53.938 { 00:14:53.938 "name": "BaseBdev3", 00:14:53.938 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:53.938 "is_configured": true, 00:14:53.938 "data_offset": 2048, 00:14:53.938 "data_size": 63488 00:14:53.938 }, 00:14:53.938 { 00:14:53.938 "name": "BaseBdev4", 00:14:53.938 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:53.938 "is_configured": true, 00:14:53.938 "data_offset": 2048, 00:14:53.938 "data_size": 63488 00:14:53.938 } 00:14:53.938 ] 00:14:53.938 }' 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.938 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.196 [2024-11-06 12:45:42.651996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.196 [2024-11-06 12:45:42.652507] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:54.196 [2024-11-06 12:45:42.652540] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:54.196 request: 00:14:54.196 { 00:14:54.196 "base_bdev": "BaseBdev1", 00:14:54.196 "raid_bdev": "raid_bdev1", 00:14:54.196 "method": "bdev_raid_add_base_bdev", 00:14:54.196 "req_id": 1 00:14:54.196 } 00:14:54.196 Got JSON-RPC error response 00:14:54.196 response: 00:14:54.196 { 00:14:54.196 "code": -22, 00:14:54.196 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:54.196 } 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:54.196 12:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.132 "name": "raid_bdev1", 00:14:55.132 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:55.132 "strip_size_kb": 0, 00:14:55.132 "state": "online", 00:14:55.132 "raid_level": "raid1", 00:14:55.132 "superblock": true, 00:14:55.132 "num_base_bdevs": 4, 00:14:55.132 "num_base_bdevs_discovered": 2, 00:14:55.132 "num_base_bdevs_operational": 2, 00:14:55.132 "base_bdevs_list": [ 00:14:55.132 { 00:14:55.132 "name": null, 00:14:55.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.132 "is_configured": false, 00:14:55.132 "data_offset": 0, 00:14:55.132 "data_size": 63488 00:14:55.132 }, 00:14:55.132 { 00:14:55.132 "name": null, 00:14:55.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.132 "is_configured": false, 00:14:55.132 "data_offset": 2048, 00:14:55.132 "data_size": 63488 00:14:55.132 }, 00:14:55.132 { 00:14:55.132 "name": "BaseBdev3", 00:14:55.132 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:55.132 "is_configured": true, 00:14:55.132 "data_offset": 2048, 00:14:55.132 "data_size": 63488 00:14:55.132 }, 00:14:55.132 { 00:14:55.132 "name": "BaseBdev4", 00:14:55.132 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:55.132 "is_configured": true, 00:14:55.132 "data_offset": 2048, 00:14:55.132 "data_size": 63488 00:14:55.132 } 00:14:55.132 ] 00:14:55.132 }' 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.132 12:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.755 12:45:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.755 "name": "raid_bdev1", 00:14:55.755 "uuid": "241b939c-acca-4a5d-bbfd-49fd63aaf52e", 00:14:55.755 "strip_size_kb": 0, 00:14:55.755 "state": "online", 00:14:55.755 "raid_level": "raid1", 00:14:55.755 "superblock": true, 00:14:55.755 "num_base_bdevs": 4, 00:14:55.755 "num_base_bdevs_discovered": 2, 00:14:55.755 "num_base_bdevs_operational": 2, 00:14:55.755 "base_bdevs_list": [ 00:14:55.755 { 00:14:55.755 "name": null, 00:14:55.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.755 "is_configured": false, 00:14:55.755 "data_offset": 0, 00:14:55.755 "data_size": 63488 00:14:55.755 }, 00:14:55.755 { 00:14:55.755 "name": null, 00:14:55.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.755 "is_configured": false, 00:14:55.755 "data_offset": 2048, 00:14:55.755 "data_size": 63488 00:14:55.755 }, 00:14:55.755 { 00:14:55.755 "name": "BaseBdev3", 00:14:55.755 "uuid": "00b5c0a5-d8e3-57b0-8523-7590fc4488bf", 00:14:55.755 "is_configured": true, 00:14:55.755 "data_offset": 2048, 00:14:55.755 "data_size": 63488 00:14:55.755 }, 
00:14:55.755 { 00:14:55.755 "name": "BaseBdev4", 00:14:55.755 "uuid": "cfc7acaf-8659-5590-9685-f11d4de3f409", 00:14:55.755 "is_configured": true, 00:14:55.755 "data_offset": 2048, 00:14:55.755 "data_size": 63488 00:14:55.755 } 00:14:55.755 ] 00:14:55.755 }' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78301 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78301 ']' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78301 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78301 00:14:55.755 killing process with pid 78301 00:14:55.755 Received shutdown signal, test time was about 60.000000 seconds 00:14:55.755 00:14:55.755 Latency(us) 00:14:55.755 [2024-11-06T12:45:44.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.755 [2024-11-06T12:45:44.412Z] =================================================================================================================== 00:14:55.755 [2024-11-06T12:45:44.412Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78301' 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78301 00:14:55.755 12:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78301 00:14:55.755 [2024-11-06 12:45:44.388965] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.755 [2024-11-06 12:45:44.389127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.755 [2024-11-06 12:45:44.389291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.755 [2024-11-06 12:45:44.389317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:56.323 [2024-11-06 12:45:44.837107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.256 12:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:57.256 00:14:57.256 real 0m29.886s 00:14:57.256 user 0m35.928s 00:14:57.256 sys 0m4.410s 00:14:57.256 12:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:57.256 ************************************ 00:14:57.256 END TEST raid_rebuild_test_sb 00:14:57.256 ************************************ 00:14:57.256 12:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.514 12:45:45 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:57.514 12:45:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:57.514 12:45:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:57.514 12:45:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:57.514 ************************************ 00:14:57.514 START TEST raid_rebuild_test_io 00:14:57.514 ************************************ 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79142 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79142 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79142 ']' 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:57.514 12:45:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.514 [2024-11-06 12:45:46.082646] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:14:57.514 [2024-11-06 12:45:46.083072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79142 ] 00:14:57.514 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.514 Zero copy mechanism will not be used. 
00:14:57.772 [2024-11-06 12:45:46.269541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.772 [2024-11-06 12:45:46.404085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.030 [2024-11-06 12:45:46.612513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.030 [2024-11-06 12:45:46.612589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.596 BaseBdev1_malloc 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.596 [2024-11-06 12:45:47.143876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.596 [2024-11-06 12:45:47.144117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.596 [2024-11-06 12:45:47.144294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:58.596 [2024-11-06 
12:45:47.144329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.596 [2024-11-06 12:45:47.147148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.596 BaseBdev1 00:14:58.596 [2024-11-06 12:45:47.147349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.596 BaseBdev2_malloc 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.596 [2024-11-06 12:45:47.196218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:58.596 [2024-11-06 12:45:47.196446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.596 [2024-11-06 12:45:47.196522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:58.596 [2024-11-06 12:45:47.196773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.596 [2024-11-06 12:45:47.199631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:58.596 [2024-11-06 12:45:47.199685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.596 BaseBdev2 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.596 BaseBdev3_malloc 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.596 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:58.854 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.854 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.854 [2024-11-06 12:45:47.257932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:58.854 [2024-11-06 12:45:47.258141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.854 [2024-11-06 12:45:47.258235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:58.854 [2024-11-06 12:45:47.258454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.855 [2024-11-06 12:45:47.261218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.855 [2024-11-06 12:45:47.261270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.855 BaseBdev3 00:14:58.855 12:45:47 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 BaseBdev4_malloc 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 [2024-11-06 12:45:47.309956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:58.855 [2024-11-06 12:45:47.310155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.855 [2024-11-06 12:45:47.310205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:58.855 [2024-11-06 12:45:47.310227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.855 [2024-11-06 12:45:47.312912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.855 [2024-11-06 12:45:47.313082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:58.855 BaseBdev4 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 spare_malloc 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 spare_delay 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 [2024-11-06 12:45:47.369795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:58.855 [2024-11-06 12:45:47.369870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.855 [2024-11-06 12:45:47.369900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:58.855 [2024-11-06 12:45:47.369924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.855 [2024-11-06 12:45:47.372662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.855 [2024-11-06 12:45:47.372715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.855 spare 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 [2024-11-06 12:45:47.377843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.855 [2024-11-06 12:45:47.380391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.855 [2024-11-06 12:45:47.380490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.855 [2024-11-06 12:45:47.380577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:58.855 [2024-11-06 12:45:47.380694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:58.855 [2024-11-06 12:45:47.380718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:58.855 [2024-11-06 12:45:47.381041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:58.855 [2024-11-06 12:45:47.381284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:58.855 [2024-11-06 12:45:47.381306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:58.855 [2024-11-06 12:45:47.381495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:58.855 12:45:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.855 "name": "raid_bdev1", 00:14:58.855 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:14:58.855 "strip_size_kb": 0, 00:14:58.855 "state": "online", 00:14:58.855 "raid_level": "raid1", 00:14:58.855 "superblock": false, 00:14:58.855 "num_base_bdevs": 4, 00:14:58.855 "num_base_bdevs_discovered": 4, 00:14:58.855 "num_base_bdevs_operational": 4, 00:14:58.855 "base_bdevs_list": [ 00:14:58.855 
{ 00:14:58.855 "name": "BaseBdev1", 00:14:58.855 "uuid": "44a1ce18-86c1-5cfc-9c37-b38fe292b48a", 00:14:58.855 "is_configured": true, 00:14:58.855 "data_offset": 0, 00:14:58.855 "data_size": 65536 00:14:58.855 }, 00:14:58.855 { 00:14:58.855 "name": "BaseBdev2", 00:14:58.855 "uuid": "0c046739-813a-5560-9e9a-bbb681e3de33", 00:14:58.855 "is_configured": true, 00:14:58.855 "data_offset": 0, 00:14:58.855 "data_size": 65536 00:14:58.855 }, 00:14:58.855 { 00:14:58.855 "name": "BaseBdev3", 00:14:58.855 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:14:58.855 "is_configured": true, 00:14:58.855 "data_offset": 0, 00:14:58.855 "data_size": 65536 00:14:58.855 }, 00:14:58.855 { 00:14:58.855 "name": "BaseBdev4", 00:14:58.855 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:14:58.855 "is_configured": true, 00:14:58.855 "data_offset": 0, 00:14:58.855 "data_size": 65536 00:14:58.855 } 00:14:58.855 ] 00:14:58.855 }' 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.855 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:59.450 [2024-11-06 12:45:47.894431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.450 12:45:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.450 [2024-11-06 12:45:47.997982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.450 "name": "raid_bdev1", 00:14:59.450 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:14:59.450 "strip_size_kb": 0, 00:14:59.450 "state": "online", 00:14:59.450 "raid_level": "raid1", 00:14:59.450 "superblock": false, 00:14:59.450 "num_base_bdevs": 4, 00:14:59.450 "num_base_bdevs_discovered": 3, 00:14:59.450 "num_base_bdevs_operational": 3, 00:14:59.450 "base_bdevs_list": [ 00:14:59.450 { 00:14:59.450 "name": null, 00:14:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.450 "is_configured": false, 00:14:59.450 "data_offset": 0, 00:14:59.450 "data_size": 65536 00:14:59.450 }, 00:14:59.450 { 00:14:59.450 "name": "BaseBdev2", 00:14:59.450 "uuid": "0c046739-813a-5560-9e9a-bbb681e3de33", 00:14:59.450 "is_configured": true, 00:14:59.450 "data_offset": 0, 00:14:59.450 "data_size": 65536 00:14:59.450 }, 00:14:59.450 { 00:14:59.450 "name": "BaseBdev3", 00:14:59.450 "uuid": 
"1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:14:59.450 "is_configured": true, 00:14:59.450 "data_offset": 0, 00:14:59.450 "data_size": 65536 00:14:59.450 }, 00:14:59.450 { 00:14:59.450 "name": "BaseBdev4", 00:14:59.450 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:14:59.450 "is_configured": true, 00:14:59.450 "data_offset": 0, 00:14:59.450 "data_size": 65536 00:14:59.450 } 00:14:59.450 ] 00:14:59.450 }' 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.450 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.708 [2024-11-06 12:45:48.130614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:59.708 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.708 Zero copy mechanism will not be used. 00:14:59.708 Running I/O for 60 seconds... 00:14:59.966 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.966 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.966 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.966 [2024-11-06 12:45:48.525398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.966 12:45:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.966 12:45:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.966 [2024-11-06 12:45:48.591182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:59.966 [2024-11-06 12:45:48.593887] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.224 [2024-11-06 12:45:48.715164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.224 
[2024-11-06 12:45:48.715903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.224 [2024-11-06 12:45:48.848600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.224 [2024-11-06 12:45:48.849037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.740 139.00 IOPS, 417.00 MiB/s [2024-11-06T12:45:49.397Z] [2024-11-06 12:45:49.206714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.740 [2024-11-06 12:45:49.208522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.998 [2024-11-06 12:45:49.441648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.998 [2024-11-06 12:45:49.442595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.998 12:45:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.998 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.998 "name": "raid_bdev1", 00:15:00.998 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:00.998 "strip_size_kb": 0, 00:15:00.998 "state": "online", 00:15:00.998 "raid_level": "raid1", 00:15:00.998 "superblock": false, 00:15:00.998 "num_base_bdevs": 4, 00:15:00.998 "num_base_bdevs_discovered": 4, 00:15:00.998 "num_base_bdevs_operational": 4, 00:15:00.998 "process": { 00:15:00.998 "type": "rebuild", 00:15:00.998 "target": "spare", 00:15:00.998 "progress": { 00:15:00.998 "blocks": 10240, 00:15:00.998 "percent": 15 00:15:00.998 } 00:15:00.998 }, 00:15:00.998 "base_bdevs_list": [ 00:15:00.998 { 00:15:00.998 "name": "spare", 00:15:00.998 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:00.998 "is_configured": true, 00:15:00.998 "data_offset": 0, 00:15:00.998 "data_size": 65536 00:15:00.999 }, 00:15:00.999 { 00:15:00.999 "name": "BaseBdev2", 00:15:00.999 "uuid": "0c046739-813a-5560-9e9a-bbb681e3de33", 00:15:00.999 "is_configured": true, 00:15:00.999 "data_offset": 0, 00:15:00.999 "data_size": 65536 00:15:00.999 }, 00:15:00.999 { 00:15:00.999 "name": "BaseBdev3", 00:15:00.999 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:00.999 "is_configured": true, 00:15:00.999 "data_offset": 0, 00:15:00.999 "data_size": 65536 00:15:00.999 }, 00:15:00.999 { 00:15:00.999 "name": "BaseBdev4", 00:15:00.999 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:00.999 "is_configured": true, 00:15:00.999 "data_offset": 0, 00:15:00.999 "data_size": 65536 00:15:00.999 } 00:15:00.999 ] 00:15:00.999 }' 00:15:00.999 12:45:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.257 [2024-11-06 12:45:49.743890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.257 [2024-11-06 12:45:49.801435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:01.257 [2024-11-06 12:45:49.802075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:01.257 [2024-11-06 12:45:49.810786] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.257 [2024-11-06 12:45:49.833226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.257 [2024-11-06 12:45:49.833294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.257 [2024-11-06 12:45:49.833316] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.257 [2024-11-06 12:45:49.866085] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.257 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.515 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.515 "name": "raid_bdev1", 00:15:01.515 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:01.515 "strip_size_kb": 0, 00:15:01.515 "state": "online", 00:15:01.515 "raid_level": "raid1", 00:15:01.515 "superblock": false, 00:15:01.515 "num_base_bdevs": 4, 00:15:01.515 
"num_base_bdevs_discovered": 3, 00:15:01.515 "num_base_bdevs_operational": 3, 00:15:01.515 "base_bdevs_list": [ 00:15:01.515 { 00:15:01.515 "name": null, 00:15:01.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.515 "is_configured": false, 00:15:01.515 "data_offset": 0, 00:15:01.515 "data_size": 65536 00:15:01.515 }, 00:15:01.515 { 00:15:01.515 "name": "BaseBdev2", 00:15:01.515 "uuid": "0c046739-813a-5560-9e9a-bbb681e3de33", 00:15:01.515 "is_configured": true, 00:15:01.515 "data_offset": 0, 00:15:01.515 "data_size": 65536 00:15:01.515 }, 00:15:01.515 { 00:15:01.515 "name": "BaseBdev3", 00:15:01.515 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:01.515 "is_configured": true, 00:15:01.515 "data_offset": 0, 00:15:01.515 "data_size": 65536 00:15:01.515 }, 00:15:01.515 { 00:15:01.515 "name": "BaseBdev4", 00:15:01.515 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:01.515 "is_configured": true, 00:15:01.515 "data_offset": 0, 00:15:01.515 "data_size": 65536 00:15:01.515 } 00:15:01.515 ] 00:15:01.515 }' 00:15:01.515 12:45:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.515 12:45:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.773 127.50 IOPS, 382.50 MiB/s [2024-11-06T12:45:50.430Z] 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.773 12:45:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.773 12:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.031 "name": "raid_bdev1", 00:15:02.031 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:02.031 "strip_size_kb": 0, 00:15:02.031 "state": "online", 00:15:02.031 "raid_level": "raid1", 00:15:02.031 "superblock": false, 00:15:02.031 "num_base_bdevs": 4, 00:15:02.031 "num_base_bdevs_discovered": 3, 00:15:02.031 "num_base_bdevs_operational": 3, 00:15:02.031 "base_bdevs_list": [ 00:15:02.031 { 00:15:02.031 "name": null, 00:15:02.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.031 "is_configured": false, 00:15:02.031 "data_offset": 0, 00:15:02.031 "data_size": 65536 00:15:02.031 }, 00:15:02.031 { 00:15:02.031 "name": "BaseBdev2", 00:15:02.031 "uuid": "0c046739-813a-5560-9e9a-bbb681e3de33", 00:15:02.031 "is_configured": true, 00:15:02.031 "data_offset": 0, 00:15:02.031 "data_size": 65536 00:15:02.031 }, 00:15:02.031 { 00:15:02.031 "name": "BaseBdev3", 00:15:02.031 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:02.031 "is_configured": true, 00:15:02.031 "data_offset": 0, 00:15:02.031 "data_size": 65536 00:15:02.031 }, 00:15:02.031 { 00:15:02.031 "name": "BaseBdev4", 00:15:02.031 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:02.031 "is_configured": true, 00:15:02.031 "data_offset": 0, 00:15:02.031 "data_size": 65536 00:15:02.031 } 00:15:02.031 ] 00:15:02.031 }' 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.031 [2024-11-06 12:45:50.578334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.031 12:45:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:02.031 [2024-11-06 12:45:50.624484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:02.031 [2024-11-06 12:45:50.627272] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.289 [2024-11-06 12:45:50.782905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.547 [2024-11-06 12:45:51.026133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.547 [2024-11-06 12:45:51.026708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.805 134.67 IOPS, 404.00 MiB/s [2024-11-06T12:45:51.462Z] [2024-11-06 12:45:51.302144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:02.805 [2024-11-06 12:45:51.425797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:15:02.805 [2024-11-06 12:45:51.427001] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.063 "name": "raid_bdev1", 00:15:03.063 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:03.063 "strip_size_kb": 0, 00:15:03.063 "state": "online", 00:15:03.063 "raid_level": "raid1", 00:15:03.063 "superblock": false, 00:15:03.063 "num_base_bdevs": 4, 00:15:03.063 "num_base_bdevs_discovered": 4, 00:15:03.063 "num_base_bdevs_operational": 4, 00:15:03.063 "process": { 00:15:03.063 "type": "rebuild", 00:15:03.063 "target": "spare", 00:15:03.063 "progress": { 00:15:03.063 "blocks": 10240, 00:15:03.063 "percent": 15 00:15:03.063 } 00:15:03.063 }, 00:15:03.063 
"base_bdevs_list": [ 00:15:03.063 { 00:15:03.063 "name": "spare", 00:15:03.063 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:03.063 "is_configured": true, 00:15:03.063 "data_offset": 0, 00:15:03.063 "data_size": 65536 00:15:03.063 }, 00:15:03.063 { 00:15:03.063 "name": "BaseBdev2", 00:15:03.063 "uuid": "0c046739-813a-5560-9e9a-bbb681e3de33", 00:15:03.063 "is_configured": true, 00:15:03.063 "data_offset": 0, 00:15:03.063 "data_size": 65536 00:15:03.063 }, 00:15:03.063 { 00:15:03.063 "name": "BaseBdev3", 00:15:03.063 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:03.063 "is_configured": true, 00:15:03.063 "data_offset": 0, 00:15:03.063 "data_size": 65536 00:15:03.063 }, 00:15:03.063 { 00:15:03.063 "name": "BaseBdev4", 00:15:03.063 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:03.063 "is_configured": true, 00:15:03.063 "data_offset": 0, 00:15:03.063 "data_size": 65536 00:15:03.063 } 00:15:03.063 ] 00:15:03.063 }' 00:15:03.063 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:03.322 12:45:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.322 [2024-11-06 12:45:51.780053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.322 [2024-11-06 12:45:51.803242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:03.322 [2024-11-06 12:45:51.911223] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:03.322 [2024-11-06 12:45:51.911297] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.322 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.323 12:45:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.323 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.323 "name": "raid_bdev1", 00:15:03.323 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:03.323 "strip_size_kb": 0, 00:15:03.323 "state": "online", 00:15:03.323 "raid_level": "raid1", 00:15:03.323 "superblock": false, 00:15:03.323 "num_base_bdevs": 4, 00:15:03.323 "num_base_bdevs_discovered": 3, 00:15:03.323 "num_base_bdevs_operational": 3, 00:15:03.323 "process": { 00:15:03.323 "type": "rebuild", 00:15:03.323 "target": "spare", 00:15:03.323 "progress": { 00:15:03.323 "blocks": 14336, 00:15:03.323 "percent": 21 00:15:03.323 } 00:15:03.323 }, 00:15:03.323 "base_bdevs_list": [ 00:15:03.323 { 00:15:03.323 "name": "spare", 00:15:03.323 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:03.323 "is_configured": true, 00:15:03.323 "data_offset": 0, 00:15:03.323 "data_size": 65536 00:15:03.323 }, 00:15:03.323 { 00:15:03.323 "name": null, 00:15:03.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.323 "is_configured": false, 00:15:03.323 "data_offset": 0, 00:15:03.323 "data_size": 65536 00:15:03.323 }, 00:15:03.323 { 00:15:03.323 "name": "BaseBdev3", 00:15:03.323 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:03.323 "is_configured": true, 00:15:03.323 "data_offset": 0, 00:15:03.323 "data_size": 65536 00:15:03.323 }, 00:15:03.323 { 00:15:03.323 "name": "BaseBdev4", 00:15:03.323 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:03.323 "is_configured": true, 00:15:03.323 "data_offset": 0, 00:15:03.323 "data_size": 65536 00:15:03.323 } 00:15:03.323 ] 00:15:03.323 }' 00:15:03.581 12:45:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.581 "name": "raid_bdev1", 00:15:03.581 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:03.581 "strip_size_kb": 0, 00:15:03.581 "state": "online", 00:15:03.581 "raid_level": "raid1", 00:15:03.581 "superblock": false, 00:15:03.581 "num_base_bdevs": 4, 00:15:03.581 "num_base_bdevs_discovered": 3, 00:15:03.581 
"num_base_bdevs_operational": 3, 00:15:03.581 "process": { 00:15:03.581 "type": "rebuild", 00:15:03.581 "target": "spare", 00:15:03.581 "progress": { 00:15:03.581 "blocks": 16384, 00:15:03.581 "percent": 25 00:15:03.581 } 00:15:03.581 }, 00:15:03.581 "base_bdevs_list": [ 00:15:03.581 { 00:15:03.581 "name": "spare", 00:15:03.581 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:03.581 "is_configured": true, 00:15:03.581 "data_offset": 0, 00:15:03.581 "data_size": 65536 00:15:03.581 }, 00:15:03.581 { 00:15:03.581 "name": null, 00:15:03.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.581 "is_configured": false, 00:15:03.581 "data_offset": 0, 00:15:03.581 "data_size": 65536 00:15:03.581 }, 00:15:03.581 { 00:15:03.581 "name": "BaseBdev3", 00:15:03.581 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:03.581 "is_configured": true, 00:15:03.581 "data_offset": 0, 00:15:03.581 "data_size": 65536 00:15:03.581 }, 00:15:03.581 { 00:15:03.581 "name": "BaseBdev4", 00:15:03.581 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:03.581 "is_configured": true, 00:15:03.581 "data_offset": 0, 00:15:03.581 "data_size": 65536 00:15:03.581 } 00:15:03.581 ] 00:15:03.581 }' 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.581 120.50 IOPS, 361.50 MiB/s [2024-11-06T12:45:52.238Z] 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.581 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.839 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.839 12:45:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.839 [2024-11-06 12:45:52.396106] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:03.839 [2024-11-06 12:45:52.396524] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:04.097 [2024-11-06 12:45:52.724624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:04.362 [2024-11-06 12:45:52.971405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:04.633 106.00 IOPS, 318.00 MiB/s [2024-11-06T12:45:53.290Z] 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.633 12:45:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.891 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.891 "name": "raid_bdev1", 00:15:04.891 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:04.891 "strip_size_kb": 0, 00:15:04.891 "state": "online", 00:15:04.891 
"raid_level": "raid1", 00:15:04.891 "superblock": false, 00:15:04.891 "num_base_bdevs": 4, 00:15:04.891 "num_base_bdevs_discovered": 3, 00:15:04.891 "num_base_bdevs_operational": 3, 00:15:04.891 "process": { 00:15:04.891 "type": "rebuild", 00:15:04.891 "target": "spare", 00:15:04.891 "progress": { 00:15:04.891 "blocks": 30720, 00:15:04.891 "percent": 46 00:15:04.891 } 00:15:04.891 }, 00:15:04.891 "base_bdevs_list": [ 00:15:04.891 { 00:15:04.891 "name": "spare", 00:15:04.891 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:04.891 "is_configured": true, 00:15:04.891 "data_offset": 0, 00:15:04.891 "data_size": 65536 00:15:04.891 }, 00:15:04.891 { 00:15:04.891 "name": null, 00:15:04.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.891 "is_configured": false, 00:15:04.891 "data_offset": 0, 00:15:04.891 "data_size": 65536 00:15:04.891 }, 00:15:04.891 { 00:15:04.891 "name": "BaseBdev3", 00:15:04.891 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:04.891 "is_configured": true, 00:15:04.891 "data_offset": 0, 00:15:04.891 "data_size": 65536 00:15:04.891 }, 00:15:04.891 { 00:15:04.891 "name": "BaseBdev4", 00:15:04.891 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:04.891 "is_configured": true, 00:15:04.891 "data_offset": 0, 00:15:04.891 "data_size": 65536 00:15:04.891 } 00:15:04.891 ] 00:15:04.891 }' 00:15:04.891 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.891 [2024-11-06 12:45:53.335958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:04.891 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.891 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.891 12:45:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.891 12:45:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.150 [2024-11-06 12:45:53.789079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:05.408 [2024-11-06 12:45:53.898942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:05.666 95.17 IOPS, 285.50 MiB/s [2024-11-06T12:45:54.323Z] [2024-11-06 12:45:54.232691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.924 "name": "raid_bdev1", 00:15:05.924 "uuid": 
"38abea21-9426-431c-b024-d36462a5619b", 00:15:05.924 "strip_size_kb": 0, 00:15:05.924 "state": "online", 00:15:05.924 "raid_level": "raid1", 00:15:05.924 "superblock": false, 00:15:05.924 "num_base_bdevs": 4, 00:15:05.924 "num_base_bdevs_discovered": 3, 00:15:05.924 "num_base_bdevs_operational": 3, 00:15:05.924 "process": { 00:15:05.924 "type": "rebuild", 00:15:05.924 "target": "spare", 00:15:05.924 "progress": { 00:15:05.924 "blocks": 45056, 00:15:05.924 "percent": 68 00:15:05.924 } 00:15:05.924 }, 00:15:05.924 "base_bdevs_list": [ 00:15:05.924 { 00:15:05.924 "name": "spare", 00:15:05.924 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:05.924 "is_configured": true, 00:15:05.924 "data_offset": 0, 00:15:05.924 "data_size": 65536 00:15:05.924 }, 00:15:05.924 { 00:15:05.924 "name": null, 00:15:05.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.924 "is_configured": false, 00:15:05.924 "data_offset": 0, 00:15:05.924 "data_size": 65536 00:15:05.924 }, 00:15:05.924 { 00:15:05.924 "name": "BaseBdev3", 00:15:05.924 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:05.924 "is_configured": true, 00:15:05.924 "data_offset": 0, 00:15:05.924 "data_size": 65536 00:15:05.924 }, 00:15:05.924 { 00:15:05.924 "name": "BaseBdev4", 00:15:05.924 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:05.924 "is_configured": true, 00:15:05.924 "data_offset": 0, 00:15:05.924 "data_size": 65536 00:15:05.924 } 00:15:05.924 ] 00:15:05.924 }' 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.924 [2024-11-06 12:45:54.503993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.924 12:45:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.924 12:45:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.118 86.43 IOPS, 259.29 MiB/s [2024-11-06T12:45:55.775Z] 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.118 [2024-11-06 12:45:55.595589] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.118 "name": "raid_bdev1", 00:15:07.118 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:07.118 "strip_size_kb": 0, 00:15:07.118 "state": "online", 00:15:07.118 "raid_level": "raid1", 00:15:07.118 "superblock": false, 00:15:07.118 "num_base_bdevs": 4, 00:15:07.118 "num_base_bdevs_discovered": 3, 
00:15:07.118 "num_base_bdevs_operational": 3, 00:15:07.118 "process": { 00:15:07.118 "type": "rebuild", 00:15:07.118 "target": "spare", 00:15:07.118 "progress": { 00:15:07.118 "blocks": 63488, 00:15:07.118 "percent": 96 00:15:07.118 } 00:15:07.118 }, 00:15:07.118 "base_bdevs_list": [ 00:15:07.118 { 00:15:07.118 "name": "spare", 00:15:07.118 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:07.118 "is_configured": true, 00:15:07.118 "data_offset": 0, 00:15:07.118 "data_size": 65536 00:15:07.118 }, 00:15:07.118 { 00:15:07.118 "name": null, 00:15:07.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.118 "is_configured": false, 00:15:07.118 "data_offset": 0, 00:15:07.118 "data_size": 65536 00:15:07.118 }, 00:15:07.118 { 00:15:07.118 "name": "BaseBdev3", 00:15:07.118 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:07.118 "is_configured": true, 00:15:07.118 "data_offset": 0, 00:15:07.118 "data_size": 65536 00:15:07.118 }, 00:15:07.118 { 00:15:07.118 "name": "BaseBdev4", 00:15:07.118 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:07.118 "is_configured": true, 00:15:07.118 "data_offset": 0, 00:15:07.118 "data_size": 65536 00:15:07.118 } 00:15:07.118 ] 00:15:07.118 }' 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.118 [2024-11-06 12:45:55.695576] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:07.118 [2024-11-06 12:45:55.698607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.118 12:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:15:08.251 79.50 IOPS, 238.50 MiB/s [2024-11-06T12:45:56.908Z] 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.251 "name": "raid_bdev1", 00:15:08.251 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:08.251 "strip_size_kb": 0, 00:15:08.251 "state": "online", 00:15:08.251 "raid_level": "raid1", 00:15:08.251 "superblock": false, 00:15:08.251 "num_base_bdevs": 4, 00:15:08.251 "num_base_bdevs_discovered": 3, 00:15:08.251 "num_base_bdevs_operational": 3, 00:15:08.251 "base_bdevs_list": [ 00:15:08.251 { 00:15:08.251 "name": "spare", 00:15:08.251 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:08.251 "is_configured": true, 00:15:08.251 "data_offset": 0, 00:15:08.251 "data_size": 65536 00:15:08.251 }, 
00:15:08.251 { 00:15:08.251 "name": null, 00:15:08.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.251 "is_configured": false, 00:15:08.251 "data_offset": 0, 00:15:08.251 "data_size": 65536 00:15:08.251 }, 00:15:08.251 { 00:15:08.251 "name": "BaseBdev3", 00:15:08.251 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:08.251 "is_configured": true, 00:15:08.251 "data_offset": 0, 00:15:08.251 "data_size": 65536 00:15:08.251 }, 00:15:08.251 { 00:15:08.251 "name": "BaseBdev4", 00:15:08.251 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:08.251 "is_configured": true, 00:15:08.251 "data_offset": 0, 00:15:08.251 "data_size": 65536 00:15:08.251 } 00:15:08.251 ] 00:15:08.251 }' 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.251 12:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.509 12:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.509 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.509 "name": "raid_bdev1", 00:15:08.509 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:08.509 "strip_size_kb": 0, 00:15:08.509 "state": "online", 00:15:08.509 "raid_level": "raid1", 00:15:08.509 "superblock": false, 00:15:08.509 "num_base_bdevs": 4, 00:15:08.509 "num_base_bdevs_discovered": 3, 00:15:08.509 "num_base_bdevs_operational": 3, 00:15:08.509 "base_bdevs_list": [ 00:15:08.509 { 00:15:08.509 "name": "spare", 00:15:08.509 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:08.509 "is_configured": true, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 }, 00:15:08.509 { 00:15:08.509 "name": null, 00:15:08.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.509 "is_configured": false, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 }, 00:15:08.509 { 00:15:08.509 "name": "BaseBdev3", 00:15:08.509 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:08.509 "is_configured": true, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 }, 00:15:08.509 { 00:15:08.509 "name": "BaseBdev4", 00:15:08.509 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:08.509 "is_configured": true, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 } 00:15:08.509 ] 00:15:08.509 }' 00:15:08.509 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.509 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:08.509 12:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.509 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.509 "name": 
"raid_bdev1", 00:15:08.509 "uuid": "38abea21-9426-431c-b024-d36462a5619b", 00:15:08.509 "strip_size_kb": 0, 00:15:08.509 "state": "online", 00:15:08.509 "raid_level": "raid1", 00:15:08.509 "superblock": false, 00:15:08.509 "num_base_bdevs": 4, 00:15:08.509 "num_base_bdevs_discovered": 3, 00:15:08.509 "num_base_bdevs_operational": 3, 00:15:08.509 "base_bdevs_list": [ 00:15:08.509 { 00:15:08.509 "name": "spare", 00:15:08.509 "uuid": "157f7ea7-70d3-5a10-afe2-da57682de7de", 00:15:08.509 "is_configured": true, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 }, 00:15:08.509 { 00:15:08.509 "name": null, 00:15:08.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.509 "is_configured": false, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 }, 00:15:08.509 { 00:15:08.509 "name": "BaseBdev3", 00:15:08.509 "uuid": "1cf2e83e-ebb4-55fb-b099-88ef08f205b7", 00:15:08.509 "is_configured": true, 00:15:08.509 "data_offset": 0, 00:15:08.509 "data_size": 65536 00:15:08.509 }, 00:15:08.509 { 00:15:08.509 "name": "BaseBdev4", 00:15:08.509 "uuid": "38bcbda8-7e72-57ed-bb0b-9fe4f2a3d3ef", 00:15:08.509 "is_configured": true, 00:15:08.510 "data_offset": 0, 00:15:08.510 "data_size": 65536 00:15:08.510 } 00:15:08.510 ] 00:15:08.510 }' 00:15:08.510 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.510 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.076 76.33 IOPS, 229.00 MiB/s [2024-11-06T12:45:57.733Z] 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.076 [2024-11-06 12:45:57.549159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.076 [2024-11-06 
12:45:57.549243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.076 00:15:09.076 Latency(us) 00:15:09.076 [2024-11-06T12:45:57.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.076 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:09.076 raid_bdev1 : 9.45 73.67 221.01 0.00 0.00 18653.16 294.17 122016.12 00:15:09.076 [2024-11-06T12:45:57.733Z] =================================================================================================================== 00:15:09.076 [2024-11-06T12:45:57.733Z] Total : 73.67 221.01 0.00 0.00 18653.16 294.17 122016.12 00:15:09.076 [2024-11-06 12:45:57.601211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.076 [2024-11-06 12:45:57.601293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.076 [2024-11-06 12:45:57.601440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.076 [2024-11-06 12:45:57.601462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.076 { 00:15:09.076 "results": [ 00:15:09.076 { 00:15:09.076 "job": "raid_bdev1", 00:15:09.076 "core_mask": "0x1", 00:15:09.076 "workload": "randrw", 00:15:09.076 "percentage": 50, 00:15:09.076 "status": "finished", 00:15:09.076 "queue_depth": 2, 00:15:09.076 "io_size": 3145728, 00:15:09.076 "runtime": 9.447695, 00:15:09.076 "iops": 73.66876259235718, 00:15:09.076 "mibps": 221.00628777707155, 00:15:09.076 "io_failed": 0, 00:15:09.076 "io_timeout": 0, 00:15:09.076 "avg_latency_us": 18653.161797283177, 00:15:09.076 "min_latency_us": 294.16727272727275, 00:15:09.076 "max_latency_us": 122016.11636363636 00:15:09.076 } 00:15:09.076 ], 00:15:09.076 "core_count": 1 00:15:09.076 } 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.076 12:45:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:09.335 
/dev/nbd0 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.593 1+0 records in 00:15:09.593 1+0 records out 00:15:09.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440346 s, 9.3 MB/s 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:09.593 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:09.593 
12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.594 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:09.852 /dev/nbd1 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:09.852 12:45:58 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.852 1+0 records in 00:15:09.852 1+0 records out 00:15:09.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041808 s, 9.8 MB/s 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.852 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.111 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.370 12:45:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:10.648 /dev/nbd1 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 
00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.648 1+0 records in 00:15:10.648 1+0 records out 00:15:10.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566958 s, 7.2 MB/s 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:10.648 12:45:59 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.648 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.214 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.498 
12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79142 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79142 ']' 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 79142 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79142 00:15:11.498 killing process with pid 79142 00:15:11.498 Received shutdown signal, test time was about 11.850919 seconds 00:15:11.498 00:15:11.498 Latency(us) 00:15:11.498 [2024-11-06T12:46:00.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.498 [2024-11-06T12:46:00.155Z] =================================================================================================================== 00:15:11.498 [2024-11-06T12:46:00.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79142' 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 79142 00:15:11.498 12:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79142 00:15:11.498 [2024-11-06 12:45:59.984246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:11.757 [2024-11-06 12:46:00.367581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:13.133 00:15:13.133 real 0m15.534s 00:15:13.133 user 0m20.280s 00:15:13.133 sys 0m1.891s 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.133 ************************************ 00:15:13.133 END TEST raid_rebuild_test_io 00:15:13.133 ************************************ 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.133 12:46:01 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:13.133 12:46:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:13.133 12:46:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.133 12:46:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.133 ************************************ 00:15:13.133 START TEST raid_rebuild_test_sb_io 00:15:13.133 ************************************ 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79576 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79576 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79576 ']' 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.133 12:46:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.133 [2024-11-06 12:46:01.658765] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:15:13.133 [2024-11-06 12:46:01.658932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79576 ] 00:15:13.133 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:13.133 Zero copy mechanism will not be used. 00:15:13.392 [2024-11-06 12:46:01.842497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.392 [2024-11-06 12:46:01.999340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.650 [2024-11-06 12:46:02.206681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.650 [2024-11-06 12:46:02.206760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.217 BaseBdev1_malloc 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.217 [2024-11-06 12:46:02.789797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.217 [2024-11-06 12:46:02.789888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.217 [2024-11-06 12:46:02.789920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:14.217 [2024-11-06 12:46:02.789939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.217 [2024-11-06 12:46:02.792750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.217 [2024-11-06 12:46:02.792804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.217 BaseBdev1 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.217 BaseBdev2_malloc 00:15:14.217 12:46:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.217 [2024-11-06 12:46:02.842649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:14.217 [2024-11-06 12:46:02.842952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.217 [2024-11-06 12:46:02.842989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:14.217 [2024-11-06 12:46:02.843011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.217 [2024-11-06 12:46:02.845716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.217 [2024-11-06 12:46:02.845765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.217 BaseBdev2 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.217 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 BaseBdev3_malloc 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 [2024-11-06 12:46:02.907278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:14.476 [2024-11-06 12:46:02.907542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.476 [2024-11-06 12:46:02.907618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:14.476 [2024-11-06 12:46:02.907747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.476 [2024-11-06 12:46:02.910457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.476 [2024-11-06 12:46:02.910506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:14.476 BaseBdev3 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 BaseBdev4_malloc 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 [2024-11-06 12:46:02.963422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:14.476 [2024-11-06 12:46:02.963695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.476 [2024-11-06 12:46:02.963768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:14.476 [2024-11-06 12:46:02.963878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.476 [2024-11-06 12:46:02.966742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.476 [2024-11-06 12:46:02.966802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:14.476 BaseBdev4 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 12:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 spare_malloc 00:15:14.476 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:14.476 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 spare_delay 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.477 12:46:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.477 [2024-11-06 12:46:03.031740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.477 [2024-11-06 12:46:03.031825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.477 [2024-11-06 12:46:03.031854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:14.477 [2024-11-06 12:46:03.031872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.477 [2024-11-06 12:46:03.034694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.477 [2024-11-06 12:46:03.034755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.477 spare 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.477 [2024-11-06 12:46:03.039815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.477 [2024-11-06 12:46:03.042471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.477 [2024-11-06 12:46:03.042701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.477 [2024-11-06 12:46:03.042894] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:14.477 [2024-11-06 12:46:03.043288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:14.477 [2024-11-06 12:46:03.043430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.477 [2024-11-06 12:46:03.043796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:14.477 [2024-11-06 12:46:03.044137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:14.477 [2024-11-06 12:46:03.044275] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:14.477 [2024-11-06 12:46:03.044639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.477 "name": "raid_bdev1", 00:15:14.477 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:14.477 "strip_size_kb": 0, 00:15:14.477 "state": "online", 00:15:14.477 "raid_level": "raid1", 00:15:14.477 "superblock": true, 00:15:14.477 "num_base_bdevs": 4, 00:15:14.477 "num_base_bdevs_discovered": 4, 00:15:14.477 "num_base_bdevs_operational": 4, 00:15:14.477 "base_bdevs_list": [ 00:15:14.477 { 00:15:14.477 "name": "BaseBdev1", 00:15:14.477 "uuid": "feeb83b4-646c-5f8e-b04e-4c89b1ee85ac", 00:15:14.477 "is_configured": true, 00:15:14.477 "data_offset": 2048, 00:15:14.477 "data_size": 63488 00:15:14.477 }, 00:15:14.477 { 00:15:14.477 "name": "BaseBdev2", 00:15:14.477 "uuid": "3228c125-4ae8-539c-ad24-18c901d91797", 00:15:14.477 "is_configured": true, 00:15:14.477 "data_offset": 2048, 00:15:14.477 "data_size": 63488 00:15:14.477 }, 00:15:14.477 { 00:15:14.477 "name": "BaseBdev3", 00:15:14.477 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:14.477 "is_configured": true, 00:15:14.477 "data_offset": 2048, 00:15:14.477 "data_size": 63488 00:15:14.477 }, 00:15:14.477 { 00:15:14.477 "name": "BaseBdev4", 00:15:14.477 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:14.477 
"is_configured": true, 00:15:14.477 "data_offset": 2048, 00:15:14.477 "data_size": 63488 00:15:14.477 } 00:15:14.477 ] 00:15:14.477 }' 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.477 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:15.045 [2024-11-06 12:46:03.585303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.045 [2024-11-06 12:46:03.692746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.045 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:15.046 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.046 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.046 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.304 
12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.304 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.304 "name": "raid_bdev1", 00:15:15.304 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:15.304 "strip_size_kb": 0, 00:15:15.304 "state": "online", 00:15:15.304 "raid_level": "raid1", 00:15:15.304 "superblock": true, 00:15:15.304 "num_base_bdevs": 4, 00:15:15.304 "num_base_bdevs_discovered": 3, 00:15:15.304 "num_base_bdevs_operational": 3, 00:15:15.304 "base_bdevs_list": [ 00:15:15.304 { 00:15:15.304 "name": null, 00:15:15.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.304 "is_configured": false, 00:15:15.304 "data_offset": 0, 00:15:15.304 "data_size": 63488 00:15:15.305 }, 00:15:15.305 { 00:15:15.305 "name": "BaseBdev2", 00:15:15.305 "uuid": "3228c125-4ae8-539c-ad24-18c901d91797", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 }, 00:15:15.305 { 00:15:15.305 "name": "BaseBdev3", 00:15:15.305 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 }, 00:15:15.305 { 00:15:15.305 "name": "BaseBdev4", 00:15:15.305 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 } 00:15:15.305 ] 00:15:15.305 }' 00:15:15.305 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.305 12:46:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.305 [2024-11-06 12:46:03.844972] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:15.305 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:15.305 Zero copy mechanism will not be used. 00:15:15.305 Running I/O for 60 seconds... 00:15:15.563 12:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.564 12:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.564 12:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.838 [2024-11-06 12:46:04.227442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.838 12:46:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.838 12:46:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:15.838 [2024-11-06 12:46:04.313914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:15.838 [2024-11-06 12:46:04.316681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.838 [2024-11-06 12:46:04.437173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:16.103 [2024-11-06 12:46:04.598901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:16.103 [2024-11-06 12:46:04.599968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:16.362 128.00 IOPS, 384.00 MiB/s [2024-11-06T12:46:05.019Z] [2024-11-06 12:46:04.966685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:16.620 [2024-11-06 12:46:05.209945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.879 "name": "raid_bdev1", 00:15:16.879 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:16.879 "strip_size_kb": 0, 00:15:16.879 "state": "online", 00:15:16.879 "raid_level": "raid1", 00:15:16.879 "superblock": true, 00:15:16.879 "num_base_bdevs": 4, 00:15:16.879 "num_base_bdevs_discovered": 4, 00:15:16.879 "num_base_bdevs_operational": 4, 00:15:16.879 "process": { 00:15:16.879 "type": "rebuild", 00:15:16.879 "target": "spare", 00:15:16.879 "progress": { 00:15:16.879 "blocks": 10240, 00:15:16.879 "percent": 16 00:15:16.879 } 00:15:16.879 }, 00:15:16.879 "base_bdevs_list": [ 00:15:16.879 { 00:15:16.879 "name": "spare", 00:15:16.879 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 
00:15:16.879 "is_configured": true, 00:15:16.879 "data_offset": 2048, 00:15:16.879 "data_size": 63488 00:15:16.879 }, 00:15:16.879 { 00:15:16.879 "name": "BaseBdev2", 00:15:16.879 "uuid": "3228c125-4ae8-539c-ad24-18c901d91797", 00:15:16.879 "is_configured": true, 00:15:16.879 "data_offset": 2048, 00:15:16.879 "data_size": 63488 00:15:16.879 }, 00:15:16.879 { 00:15:16.879 "name": "BaseBdev3", 00:15:16.879 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:16.879 "is_configured": true, 00:15:16.879 "data_offset": 2048, 00:15:16.879 "data_size": 63488 00:15:16.879 }, 00:15:16.879 { 00:15:16.879 "name": "BaseBdev4", 00:15:16.879 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:16.879 "is_configured": true, 00:15:16.879 "data_offset": 2048, 00:15:16.879 "data_size": 63488 00:15:16.879 } 00:15:16.879 ] 00:15:16.879 }' 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.879 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.879 [2024-11-06 12:46:05.440366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.879 [2024-11-06 12:46:05.520361] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.136 [2024-11-06 12:46:05.542833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:17.136 [2024-11-06 12:46:05.542950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.136 [2024-11-06 12:46:05.542973] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.136 [2024-11-06 12:46:05.591838] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.136 "name": "raid_bdev1", 00:15:17.136 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:17.136 "strip_size_kb": 0, 00:15:17.136 "state": "online", 00:15:17.136 "raid_level": "raid1", 00:15:17.136 "superblock": true, 00:15:17.136 "num_base_bdevs": 4, 00:15:17.136 "num_base_bdevs_discovered": 3, 00:15:17.136 "num_base_bdevs_operational": 3, 00:15:17.136 "base_bdevs_list": [ 00:15:17.136 { 00:15:17.136 "name": null, 00:15:17.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.136 "is_configured": false, 00:15:17.136 "data_offset": 0, 00:15:17.136 "data_size": 63488 00:15:17.136 }, 00:15:17.136 { 00:15:17.136 "name": "BaseBdev2", 00:15:17.136 "uuid": "3228c125-4ae8-539c-ad24-18c901d91797", 00:15:17.136 "is_configured": true, 00:15:17.136 "data_offset": 2048, 00:15:17.136 "data_size": 63488 00:15:17.136 }, 00:15:17.136 { 00:15:17.136 "name": "BaseBdev3", 00:15:17.136 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:17.136 "is_configured": true, 00:15:17.136 "data_offset": 2048, 00:15:17.136 "data_size": 63488 00:15:17.136 }, 00:15:17.136 { 00:15:17.136 "name": "BaseBdev4", 00:15:17.136 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:17.136 "is_configured": true, 00:15:17.136 "data_offset": 2048, 00:15:17.136 "data_size": 63488 00:15:17.136 } 00:15:17.136 ] 00:15:17.136 }' 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.136 12:46:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.649 111.00 IOPS, 333.00 MiB/s [2024-11-06T12:46:06.306Z] 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.649 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.649 "name": "raid_bdev1", 00:15:17.649 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:17.649 "strip_size_kb": 0, 00:15:17.649 "state": "online", 00:15:17.649 "raid_level": "raid1", 00:15:17.649 "superblock": true, 00:15:17.649 "num_base_bdevs": 4, 00:15:17.649 "num_base_bdevs_discovered": 3, 00:15:17.649 "num_base_bdevs_operational": 3, 00:15:17.649 "base_bdevs_list": [ 00:15:17.649 { 00:15:17.649 "name": null, 00:15:17.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.649 "is_configured": false, 00:15:17.649 "data_offset": 0, 00:15:17.649 "data_size": 63488 00:15:17.649 }, 00:15:17.649 { 00:15:17.649 "name": "BaseBdev2", 00:15:17.649 "uuid": "3228c125-4ae8-539c-ad24-18c901d91797", 00:15:17.649 "is_configured": true, 00:15:17.649 "data_offset": 2048, 00:15:17.649 "data_size": 63488 00:15:17.649 }, 
00:15:17.649 { 00:15:17.649 "name": "BaseBdev3", 00:15:17.649 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:17.649 "is_configured": true, 00:15:17.649 "data_offset": 2048, 00:15:17.649 "data_size": 63488 00:15:17.649 }, 00:15:17.649 { 00:15:17.649 "name": "BaseBdev4", 00:15:17.649 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:17.649 "is_configured": true, 00:15:17.649 "data_offset": 2048, 00:15:17.649 "data_size": 63488 00:15:17.649 } 00:15:17.649 ] 00:15:17.649 }' 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.650 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.650 [2024-11-06 12:46:06.275904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.911 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.911 12:46:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:17.911 [2024-11-06 12:46:06.351888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:17.911 [2024-11-06 12:46:06.354500] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.911 [2024-11-06 12:46:06.495030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:15:18.168 [2024-11-06 12:46:06.628524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:18.169 [2024-11-06 12:46:06.629143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:18.427 133.00 IOPS, 399.00 MiB/s [2024-11-06T12:46:07.084Z] [2024-11-06 12:46:06.923639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:18.427 [2024-11-06 12:46:07.054705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:18.427 [2024-11-06 12:46:07.055073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:18.684 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.684 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.684 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.684 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.684 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.942 "name": "raid_bdev1", 00:15:18.942 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:18.942 "strip_size_kb": 0, 00:15:18.942 "state": "online", 00:15:18.942 "raid_level": "raid1", 00:15:18.942 "superblock": true, 00:15:18.942 "num_base_bdevs": 4, 00:15:18.942 "num_base_bdevs_discovered": 4, 00:15:18.942 "num_base_bdevs_operational": 4, 00:15:18.942 "process": { 00:15:18.942 "type": "rebuild", 00:15:18.942 "target": "spare", 00:15:18.942 "progress": { 00:15:18.942 "blocks": 12288, 00:15:18.942 "percent": 19 00:15:18.942 } 00:15:18.942 }, 00:15:18.942 "base_bdevs_list": [ 00:15:18.942 { 00:15:18.942 "name": "spare", 00:15:18.942 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:18.942 "is_configured": true, 00:15:18.942 "data_offset": 2048, 00:15:18.942 "data_size": 63488 00:15:18.942 }, 00:15:18.942 { 00:15:18.942 "name": "BaseBdev2", 00:15:18.942 "uuid": "3228c125-4ae8-539c-ad24-18c901d91797", 00:15:18.942 "is_configured": true, 00:15:18.942 "data_offset": 2048, 00:15:18.942 "data_size": 63488 00:15:18.942 }, 00:15:18.942 { 00:15:18.942 "name": "BaseBdev3", 00:15:18.942 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:18.942 "is_configured": true, 00:15:18.942 "data_offset": 2048, 00:15:18.942 "data_size": 63488 00:15:18.942 }, 00:15:18.942 { 00:15:18.942 "name": "BaseBdev4", 00:15:18.942 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:18.942 "is_configured": true, 00:15:18.942 "data_offset": 2048, 00:15:18.942 "data_size": 63488 00:15:18.942 } 00:15:18.942 ] 00:15:18.942 }' 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.942 [2024-11-06 12:46:07.432565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:18.942 [2024-11-06 
12:46:07.433185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.942 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:18.943 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.943 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.943 [2024-11-06 12:46:07.506711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.943 [2024-11-06 12:46:07.570540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:18.943 [2024-11-06 12:46:07.571397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:19.201 [2024-11-06 12:46:07.782085] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:19.201 [2024-11-06 12:46:07.782155] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.201 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.459 110.25 IOPS, 330.75 MiB/s [2024-11-06T12:46:08.116Z] 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.459 "name": "raid_bdev1", 00:15:19.459 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:19.459 "strip_size_kb": 0, 
00:15:19.459 "state": "online", 00:15:19.459 "raid_level": "raid1", 00:15:19.459 "superblock": true, 00:15:19.459 "num_base_bdevs": 4, 00:15:19.459 "num_base_bdevs_discovered": 3, 00:15:19.459 "num_base_bdevs_operational": 3, 00:15:19.459 "process": { 00:15:19.459 "type": "rebuild", 00:15:19.459 "target": "spare", 00:15:19.459 "progress": { 00:15:19.459 "blocks": 16384, 00:15:19.459 "percent": 25 00:15:19.459 } 00:15:19.459 }, 00:15:19.459 "base_bdevs_list": [ 00:15:19.459 { 00:15:19.459 "name": "spare", 00:15:19.459 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:19.459 "is_configured": true, 00:15:19.459 "data_offset": 2048, 00:15:19.459 "data_size": 63488 00:15:19.459 }, 00:15:19.459 { 00:15:19.459 "name": null, 00:15:19.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.459 "is_configured": false, 00:15:19.459 "data_offset": 0, 00:15:19.459 "data_size": 63488 00:15:19.459 }, 00:15:19.459 { 00:15:19.459 "name": "BaseBdev3", 00:15:19.459 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:19.459 "is_configured": true, 00:15:19.459 "data_offset": 2048, 00:15:19.459 "data_size": 63488 00:15:19.459 }, 00:15:19.459 { 00:15:19.459 "name": "BaseBdev4", 00:15:19.459 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:19.459 "is_configured": true, 00:15:19.459 "data_offset": 2048, 00:15:19.459 "data_size": 63488 00:15:19.459 } 00:15:19.459 ] 00:15:19.459 }' 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:15:19.459 12:46:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.459 12:46:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.459 12:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.459 "name": "raid_bdev1", 00:15:19.459 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:19.459 "strip_size_kb": 0, 00:15:19.459 "state": "online", 00:15:19.459 "raid_level": "raid1", 00:15:19.459 "superblock": true, 00:15:19.459 "num_base_bdevs": 4, 00:15:19.459 "num_base_bdevs_discovered": 3, 00:15:19.459 "num_base_bdevs_operational": 3, 00:15:19.459 "process": { 00:15:19.459 "type": "rebuild", 00:15:19.459 "target": "spare", 00:15:19.459 "progress": { 00:15:19.459 "blocks": 18432, 00:15:19.459 "percent": 29 00:15:19.459 } 00:15:19.459 }, 00:15:19.459 "base_bdevs_list": [ 00:15:19.459 { 00:15:19.459 "name": "spare", 00:15:19.459 "uuid": 
"28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:19.459 "is_configured": true, 00:15:19.459 "data_offset": 2048, 00:15:19.459 "data_size": 63488 00:15:19.459 }, 00:15:19.459 { 00:15:19.459 "name": null, 00:15:19.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.459 "is_configured": false, 00:15:19.459 "data_offset": 0, 00:15:19.459 "data_size": 63488 00:15:19.459 }, 00:15:19.459 { 00:15:19.459 "name": "BaseBdev3", 00:15:19.459 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:19.459 "is_configured": true, 00:15:19.459 "data_offset": 2048, 00:15:19.459 "data_size": 63488 00:15:19.459 }, 00:15:19.459 { 00:15:19.459 "name": "BaseBdev4", 00:15:19.459 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:19.459 "is_configured": true, 00:15:19.459 "data_offset": 2048, 00:15:19.459 "data_size": 63488 00:15:19.459 } 00:15:19.459 ] 00:15:19.459 }' 00:15:19.459 12:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.459 [2024-11-06 12:46:08.045724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:19.459 12:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.459 12:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.717 12:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.717 12:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.717 [2024-11-06 12:46:08.156253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:19.975 [2024-11-06 12:46:08.432607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:19.975 [2024-11-06 12:46:08.433038] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:19.975 [2024-11-06 12:46:08.553026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:19.975 [2024-11-06 12:46:08.553625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:20.492 99.60 IOPS, 298.80 MiB/s [2024-11-06T12:46:09.149Z] [2024-11-06 12:46:08.894860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:20.492 [2024-11-06 12:46:08.895995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:20.492 [2024-11-06 12:46:09.127488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:20.492 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.492 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.492 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.492 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.750 "name": "raid_bdev1", 00:15:20.750 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:20.750 "strip_size_kb": 0, 00:15:20.750 "state": "online", 00:15:20.750 "raid_level": "raid1", 00:15:20.750 "superblock": true, 00:15:20.750 "num_base_bdevs": 4, 00:15:20.750 "num_base_bdevs_discovered": 3, 00:15:20.750 "num_base_bdevs_operational": 3, 00:15:20.750 "process": { 00:15:20.750 "type": "rebuild", 00:15:20.750 "target": "spare", 00:15:20.750 "progress": { 00:15:20.750 "blocks": 34816, 00:15:20.750 "percent": 54 00:15:20.750 } 00:15:20.750 }, 00:15:20.750 "base_bdevs_list": [ 00:15:20.750 { 00:15:20.750 "name": "spare", 00:15:20.750 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:20.750 "is_configured": true, 00:15:20.750 "data_offset": 2048, 00:15:20.750 "data_size": 63488 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "name": null, 00:15:20.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.750 "is_configured": false, 00:15:20.750 "data_offset": 0, 00:15:20.750 "data_size": 63488 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "name": "BaseBdev3", 00:15:20.750 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:20.750 "is_configured": true, 00:15:20.750 "data_offset": 2048, 00:15:20.750 "data_size": 63488 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "name": "BaseBdev4", 00:15:20.750 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:20.750 "is_configured": true, 00:15:20.750 "data_offset": 2048, 00:15:20.750 "data_size": 63488 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 }' 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.750 
12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.750 12:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.008 [2024-11-06 12:46:09.462103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:21.268 [2024-11-06 12:46:09.825883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:21.837 89.67 IOPS, 269.00 MiB/s [2024-11-06T12:46:10.494Z] 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.837 12:46:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.837 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.837 "name": "raid_bdev1", 00:15:21.837 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:21.837 "strip_size_kb": 0, 00:15:21.837 "state": "online", 00:15:21.837 "raid_level": "raid1", 00:15:21.837 "superblock": true, 00:15:21.837 "num_base_bdevs": 4, 00:15:21.837 "num_base_bdevs_discovered": 3, 00:15:21.837 "num_base_bdevs_operational": 3, 00:15:21.837 "process": { 00:15:21.837 "type": "rebuild", 00:15:21.837 "target": "spare", 00:15:21.837 "progress": { 00:15:21.837 "blocks": 51200, 00:15:21.838 "percent": 80 00:15:21.838 } 00:15:21.838 }, 00:15:21.838 "base_bdevs_list": [ 00:15:21.838 { 00:15:21.838 "name": "spare", 00:15:21.838 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:21.838 "is_configured": true, 00:15:21.838 "data_offset": 2048, 00:15:21.838 "data_size": 63488 00:15:21.838 }, 00:15:21.838 { 00:15:21.838 "name": null, 00:15:21.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.838 "is_configured": false, 00:15:21.838 "data_offset": 0, 00:15:21.838 "data_size": 63488 00:15:21.838 }, 00:15:21.838 { 00:15:21.838 "name": "BaseBdev3", 00:15:21.838 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:21.838 "is_configured": true, 00:15:21.838 "data_offset": 2048, 00:15:21.838 "data_size": 63488 00:15:21.838 }, 00:15:21.838 { 00:15:21.838 "name": "BaseBdev4", 00:15:21.838 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:21.838 "is_configured": true, 00:15:21.838 "data_offset": 2048, 00:15:21.838 "data_size": 63488 00:15:21.838 } 00:15:21.838 ] 00:15:21.838 }' 00:15:21.838 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.838 [2024-11-06 12:46:10.390169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 
00:15:21.838 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.838 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.838 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.838 12:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.403 82.71 IOPS, 248.14 MiB/s [2024-11-06T12:46:11.060Z] [2024-11-06 12:46:10.966854] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:22.662 [2024-11-06 12:46:11.074778] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:22.662 [2024-11-06 12:46:11.079483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.921 "name": "raid_bdev1", 00:15:22.921 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:22.921 "strip_size_kb": 0, 00:15:22.921 "state": "online", 00:15:22.921 "raid_level": "raid1", 00:15:22.921 "superblock": true, 00:15:22.921 "num_base_bdevs": 4, 00:15:22.921 "num_base_bdevs_discovered": 3, 00:15:22.921 "num_base_bdevs_operational": 3, 00:15:22.921 "base_bdevs_list": [ 00:15:22.921 { 00:15:22.921 "name": "spare", 00:15:22.921 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:22.921 "is_configured": true, 00:15:22.921 "data_offset": 2048, 00:15:22.921 "data_size": 63488 00:15:22.921 }, 00:15:22.921 { 00:15:22.921 "name": null, 00:15:22.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.921 "is_configured": false, 00:15:22.921 "data_offset": 0, 00:15:22.921 "data_size": 63488 00:15:22.921 }, 00:15:22.921 { 00:15:22.921 "name": "BaseBdev3", 00:15:22.921 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:22.921 "is_configured": true, 00:15:22.921 "data_offset": 2048, 00:15:22.921 "data_size": 63488 00:15:22.921 }, 00:15:22.921 { 00:15:22.921 "name": "BaseBdev4", 00:15:22.921 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:22.921 "is_configured": true, 00:15:22.921 "data_offset": 2048, 00:15:22.921 "data_size": 63488 00:15:22.921 } 00:15:22.921 ] 00:15:22.921 }' 00:15:22.921 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.180 "name": "raid_bdev1", 00:15:23.180 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:23.180 "strip_size_kb": 0, 00:15:23.180 "state": "online", 00:15:23.180 "raid_level": "raid1", 00:15:23.180 "superblock": true, 00:15:23.180 "num_base_bdevs": 4, 00:15:23.180 "num_base_bdevs_discovered": 3, 00:15:23.180 "num_base_bdevs_operational": 3, 00:15:23.180 "base_bdevs_list": [ 00:15:23.180 { 00:15:23.180 "name": "spare", 00:15:23.180 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:23.180 "is_configured": true, 00:15:23.180 "data_offset": 2048, 00:15:23.180 "data_size": 63488 00:15:23.180 }, 
00:15:23.180 { 00:15:23.180 "name": null, 00:15:23.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.180 "is_configured": false, 00:15:23.180 "data_offset": 0, 00:15:23.180 "data_size": 63488 00:15:23.180 }, 00:15:23.180 { 00:15:23.180 "name": "BaseBdev3", 00:15:23.180 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:23.180 "is_configured": true, 00:15:23.180 "data_offset": 2048, 00:15:23.180 "data_size": 63488 00:15:23.180 }, 00:15:23.180 { 00:15:23.180 "name": "BaseBdev4", 00:15:23.180 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:23.180 "is_configured": true, 00:15:23.180 "data_offset": 2048, 00:15:23.180 "data_size": 63488 00:15:23.180 } 00:15:23.180 ] 00:15:23.180 }' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.180 12:46:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.180 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.439 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.439 "name": "raid_bdev1", 00:15:23.439 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:23.439 "strip_size_kb": 0, 00:15:23.439 "state": "online", 00:15:23.439 "raid_level": "raid1", 00:15:23.439 "superblock": true, 00:15:23.439 "num_base_bdevs": 4, 00:15:23.439 "num_base_bdevs_discovered": 3, 00:15:23.439 "num_base_bdevs_operational": 3, 00:15:23.439 "base_bdevs_list": [ 00:15:23.439 { 00:15:23.439 "name": "spare", 00:15:23.439 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:23.439 "is_configured": true, 00:15:23.439 "data_offset": 2048, 00:15:23.439 "data_size": 63488 00:15:23.439 }, 00:15:23.439 { 00:15:23.439 "name": null, 00:15:23.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.439 "is_configured": false, 00:15:23.439 "data_offset": 0, 00:15:23.439 "data_size": 63488 00:15:23.439 }, 00:15:23.439 { 00:15:23.439 "name": "BaseBdev3", 00:15:23.439 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:23.439 "is_configured": true, 00:15:23.439 "data_offset": 2048, 00:15:23.439 
"data_size": 63488 00:15:23.439 }, 00:15:23.439 { 00:15:23.439 "name": "BaseBdev4", 00:15:23.439 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:23.439 "is_configured": true, 00:15:23.439 "data_offset": 2048, 00:15:23.439 "data_size": 63488 00:15:23.439 } 00:15:23.439 ] 00:15:23.439 }' 00:15:23.439 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.439 12:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.697 77.25 IOPS, 231.75 MiB/s [2024-11-06T12:46:12.354Z] 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:23.697 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.697 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.697 [2024-11-06 12:46:12.341933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.697 [2024-11-06 12:46:12.341986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.956 00:15:23.956 Latency(us) 00:15:23.956 [2024-11-06T12:46:12.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.956 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:23.956 raid_bdev1 : 8.56 74.63 223.90 0.00 0.00 17547.93 288.58 122016.12 00:15:23.956 [2024-11-06T12:46:12.613Z] =================================================================================================================== 00:15:23.956 [2024-11-06T12:46:12.613Z] Total : 74.63 223.90 0.00 0.00 17547.93 288.58 122016.12 00:15:23.956 [2024-11-06 12:46:12.429911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.956 [2024-11-06 12:46:12.429994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.956 [2024-11-06 12:46:12.430136] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.956 [2024-11-06 12:46:12.430154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:23.956 { 00:15:23.956 "results": [ 00:15:23.956 { 00:15:23.956 "job": "raid_bdev1", 00:15:23.956 "core_mask": "0x1", 00:15:23.956 "workload": "randrw", 00:15:23.956 "percentage": 50, 00:15:23.956 "status": "finished", 00:15:23.956 "queue_depth": 2, 00:15:23.956 "io_size": 3145728, 00:15:23.956 "runtime": 8.56201, 00:15:23.956 "iops": 74.63200813827594, 00:15:23.956 "mibps": 223.89602441482782, 00:15:23.956 "io_failed": 0, 00:15:23.956 "io_timeout": 0, 00:15:23.956 "avg_latency_us": 17547.932098449284, 00:15:23.956 "min_latency_us": 288.58181818181816, 00:15:23.956 "max_latency_us": 122016.11636363636 00:15:23.956 } 00:15:23.956 ], 00:15:23.956 "core_count": 1 00:15:23.956 } 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:23.956 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:24.230 /dev/nbd0 00:15:24.230 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:24.230 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:24.230 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:24.230 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:24.230 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:24.230 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.231 1+0 records in 00:15:24.231 1+0 records out 00:15:24.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353942 s, 11.6 MB/s 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:24.231 12:46:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.231 12:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:24.490 /dev/nbd1 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.490 1+0 records in 00:15:24.490 1+0 records out 00:15:24.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033296 s, 12.3 MB/s 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.490 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 
-- # for i in "${nbd_list[@]}" 00:15:24.748 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.313 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:25.313 /dev/nbd1 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.571 1+0 records in 00:15:25.571 1+0 records out 00:15:25.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336085 s, 12.2 MB/s 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:25.571 12:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.571 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:25.829 12:46:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.829 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.087 12:46:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.087 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.087 [2024-11-06 12:46:14.708443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.087 [2024-11-06 12:46:14.708512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.087 [2024-11-06 12:46:14.708569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:26.087 [2024-11-06 12:46:14.708584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.087 [2024-11-06 12:46:14.711526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.087 [2024-11-06 12:46:14.711572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.087 [2024-11-06 12:46:14.711707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:26.087 [2024-11-06 12:46:14.711771] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.088 [2024-11-06 12:46:14.711947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.088 [2024-11-06 12:46:14.712093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.088 spare 00:15:26.088 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.088 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:26.088 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.088 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 [2024-11-06 12:46:14.812264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:26.347 [2024-11-06 12:46:14.812303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:26.347 [2024-11-06 12:46:14.812686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:26.347 [2024-11-06 12:46:14.812937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:26.347 [2024-11-06 12:46:14.812970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:26.347 [2024-11-06 12:46:14.813212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.347 "name": "raid_bdev1", 00:15:26.347 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:26.347 "strip_size_kb": 0, 00:15:26.347 "state": "online", 00:15:26.347 "raid_level": "raid1", 00:15:26.347 "superblock": true, 00:15:26.347 "num_base_bdevs": 4, 00:15:26.347 "num_base_bdevs_discovered": 3, 00:15:26.347 "num_base_bdevs_operational": 3, 00:15:26.347 "base_bdevs_list": [ 00:15:26.347 { 00:15:26.347 "name": "spare", 00:15:26.347 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:26.347 "is_configured": true, 
00:15:26.347 "data_offset": 2048, 00:15:26.347 "data_size": 63488 00:15:26.347 }, 00:15:26.347 { 00:15:26.347 "name": null, 00:15:26.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.347 "is_configured": false, 00:15:26.347 "data_offset": 2048, 00:15:26.347 "data_size": 63488 00:15:26.347 }, 00:15:26.347 { 00:15:26.347 "name": "BaseBdev3", 00:15:26.347 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:26.347 "is_configured": true, 00:15:26.347 "data_offset": 2048, 00:15:26.347 "data_size": 63488 00:15:26.347 }, 00:15:26.347 { 00:15:26.347 "name": "BaseBdev4", 00:15:26.347 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:26.347 "is_configured": true, 00:15:26.347 "data_offset": 2048, 00:15:26.347 "data_size": 63488 00:15:26.347 } 00:15:26.347 ] 00:15:26.347 }' 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.347 12:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.914 "name": "raid_bdev1", 00:15:26.914 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:26.914 "strip_size_kb": 0, 00:15:26.914 "state": "online", 00:15:26.914 "raid_level": "raid1", 00:15:26.914 "superblock": true, 00:15:26.914 "num_base_bdevs": 4, 00:15:26.914 "num_base_bdevs_discovered": 3, 00:15:26.914 "num_base_bdevs_operational": 3, 00:15:26.914 "base_bdevs_list": [ 00:15:26.914 { 00:15:26.914 "name": "spare", 00:15:26.914 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:26.914 "is_configured": true, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 }, 00:15:26.914 { 00:15:26.914 "name": null, 00:15:26.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.914 "is_configured": false, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 }, 00:15:26.914 { 00:15:26.914 "name": "BaseBdev3", 00:15:26.914 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:26.914 "is_configured": true, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 }, 00:15:26.914 { 00:15:26.914 "name": "BaseBdev4", 00:15:26.914 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:26.914 "is_configured": true, 00:15:26.914 "data_offset": 2048, 00:15:26.914 "data_size": 63488 00:15:26.914 } 00:15:26.914 ] 00:15:26.914 }' 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.914 [2024-11-06 12:46:15.545498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.914 12:46:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.914 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.237 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.237 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.237 "name": "raid_bdev1", 00:15:27.237 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:27.237 "strip_size_kb": 0, 00:15:27.237 "state": "online", 00:15:27.237 "raid_level": "raid1", 00:15:27.237 "superblock": true, 00:15:27.237 "num_base_bdevs": 4, 00:15:27.237 "num_base_bdevs_discovered": 2, 00:15:27.237 "num_base_bdevs_operational": 2, 00:15:27.237 "base_bdevs_list": [ 00:15:27.237 { 00:15:27.237 "name": null, 00:15:27.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.237 "is_configured": false, 00:15:27.237 "data_offset": 0, 00:15:27.237 "data_size": 63488 00:15:27.237 }, 00:15:27.237 { 00:15:27.237 "name": null, 00:15:27.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.237 "is_configured": false, 00:15:27.237 "data_offset": 2048, 00:15:27.237 "data_size": 63488 00:15:27.237 }, 00:15:27.237 { 00:15:27.237 "name": "BaseBdev3", 00:15:27.237 "uuid": 
"869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:27.237 "is_configured": true, 00:15:27.237 "data_offset": 2048, 00:15:27.237 "data_size": 63488 00:15:27.237 }, 00:15:27.237 { 00:15:27.237 "name": "BaseBdev4", 00:15:27.237 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:27.237 "is_configured": true, 00:15:27.237 "data_offset": 2048, 00:15:27.237 "data_size": 63488 00:15:27.237 } 00:15:27.237 ] 00:15:27.237 }' 00:15:27.237 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.237 12:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.495 12:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:27.495 12:46:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.495 12:46:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.495 [2024-11-06 12:46:16.077755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.495 [2024-11-06 12:46:16.078016] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:27.495 [2024-11-06 12:46:16.078038] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:27.495 [2024-11-06 12:46:16.078097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.495 [2024-11-06 12:46:16.091919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:27.495 12:46:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.495 12:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:27.495 [2024-11-06 12:46:16.094457] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.872 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.873 "name": "raid_bdev1", 00:15:28.873 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:28.873 "strip_size_kb": 0, 00:15:28.873 "state": "online", 
00:15:28.873 "raid_level": "raid1", 00:15:28.873 "superblock": true, 00:15:28.873 "num_base_bdevs": 4, 00:15:28.873 "num_base_bdevs_discovered": 3, 00:15:28.873 "num_base_bdevs_operational": 3, 00:15:28.873 "process": { 00:15:28.873 "type": "rebuild", 00:15:28.873 "target": "spare", 00:15:28.873 "progress": { 00:15:28.873 "blocks": 20480, 00:15:28.873 "percent": 32 00:15:28.873 } 00:15:28.873 }, 00:15:28.873 "base_bdevs_list": [ 00:15:28.873 { 00:15:28.873 "name": "spare", 00:15:28.873 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:28.873 "is_configured": true, 00:15:28.873 "data_offset": 2048, 00:15:28.873 "data_size": 63488 00:15:28.873 }, 00:15:28.873 { 00:15:28.873 "name": null, 00:15:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.873 "is_configured": false, 00:15:28.873 "data_offset": 2048, 00:15:28.873 "data_size": 63488 00:15:28.873 }, 00:15:28.873 { 00:15:28.873 "name": "BaseBdev3", 00:15:28.873 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:28.873 "is_configured": true, 00:15:28.873 "data_offset": 2048, 00:15:28.873 "data_size": 63488 00:15:28.873 }, 00:15:28.873 { 00:15:28.873 "name": "BaseBdev4", 00:15:28.873 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:28.873 "is_configured": true, 00:15:28.873 "data_offset": 2048, 00:15:28.873 "data_size": 63488 00:15:28.873 } 00:15:28.873 ] 00:15:28.873 }' 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:28.873 12:46:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.873 [2024-11-06 12:46:17.267998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.873 [2024-11-06 12:46:17.302759] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:28.873 [2024-11-06 12:46:17.302847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.873 [2024-11-06 12:46:17.302875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:28.873 [2024-11-06 12:46:17.302887] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.873 12:46:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.873 "name": "raid_bdev1", 00:15:28.873 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:28.873 "strip_size_kb": 0, 00:15:28.873 "state": "online", 00:15:28.873 "raid_level": "raid1", 00:15:28.873 "superblock": true, 00:15:28.873 "num_base_bdevs": 4, 00:15:28.873 "num_base_bdevs_discovered": 2, 00:15:28.873 "num_base_bdevs_operational": 2, 00:15:28.873 "base_bdevs_list": [ 00:15:28.873 { 00:15:28.873 "name": null, 00:15:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.873 "is_configured": false, 00:15:28.873 "data_offset": 0, 00:15:28.873 "data_size": 63488 00:15:28.873 }, 00:15:28.873 { 00:15:28.873 "name": null, 00:15:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.873 "is_configured": false, 00:15:28.873 "data_offset": 2048, 00:15:28.873 "data_size": 63488 00:15:28.873 }, 00:15:28.873 { 00:15:28.873 "name": "BaseBdev3", 00:15:28.873 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:28.873 "is_configured": true, 00:15:28.873 "data_offset": 2048, 00:15:28.873 "data_size": 63488 00:15:28.873 }, 00:15:28.873 { 00:15:28.873 "name": "BaseBdev4", 00:15:28.873 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:28.873 "is_configured": true, 00:15:28.873 "data_offset": 2048, 00:15:28.873 
"data_size": 63488 00:15:28.873 } 00:15:28.873 ] 00:15:28.873 }' 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.873 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.440 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:29.440 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.440 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.440 [2024-11-06 12:46:17.857488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:29.440 [2024-11-06 12:46:17.857599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.440 [2024-11-06 12:46:17.857640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:29.440 [2024-11-06 12:46:17.857656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.440 [2024-11-06 12:46:17.858288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.440 [2024-11-06 12:46:17.858321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:29.440 [2024-11-06 12:46:17.858442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:29.440 [2024-11-06 12:46:17.858462] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:29.440 [2024-11-06 12:46:17.858479] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:29.440 [2024-11-06 12:46:17.858510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.440 [2024-11-06 12:46:17.872581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:29.440 spare 00:15:29.440 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.440 12:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:29.440 [2024-11-06 12:46:17.875063] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.376 "name": "raid_bdev1", 00:15:30.376 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:30.376 "strip_size_kb": 0, 00:15:30.376 
"state": "online", 00:15:30.376 "raid_level": "raid1", 00:15:30.376 "superblock": true, 00:15:30.376 "num_base_bdevs": 4, 00:15:30.376 "num_base_bdevs_discovered": 3, 00:15:30.376 "num_base_bdevs_operational": 3, 00:15:30.376 "process": { 00:15:30.376 "type": "rebuild", 00:15:30.376 "target": "spare", 00:15:30.376 "progress": { 00:15:30.376 "blocks": 20480, 00:15:30.376 "percent": 32 00:15:30.376 } 00:15:30.376 }, 00:15:30.376 "base_bdevs_list": [ 00:15:30.376 { 00:15:30.376 "name": "spare", 00:15:30.376 "uuid": "28859247-f1e1-5edb-9960-f90a0f6320f5", 00:15:30.376 "is_configured": true, 00:15:30.376 "data_offset": 2048, 00:15:30.376 "data_size": 63488 00:15:30.376 }, 00:15:30.376 { 00:15:30.376 "name": null, 00:15:30.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.376 "is_configured": false, 00:15:30.376 "data_offset": 2048, 00:15:30.376 "data_size": 63488 00:15:30.376 }, 00:15:30.376 { 00:15:30.376 "name": "BaseBdev3", 00:15:30.376 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:30.376 "is_configured": true, 00:15:30.376 "data_offset": 2048, 00:15:30.376 "data_size": 63488 00:15:30.376 }, 00:15:30.376 { 00:15:30.376 "name": "BaseBdev4", 00:15:30.376 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:30.376 "is_configured": true, 00:15:30.376 "data_offset": 2048, 00:15:30.376 "data_size": 63488 00:15:30.376 } 00:15:30.376 ] 00:15:30.376 }' 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.376 12:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:30.635 12:46:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.635 [2024-11-06 12:46:19.048592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.635 [2024-11-06 12:46:19.083667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.635 [2024-11-06 12:46:19.083747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.635 [2024-11-06 12:46:19.083772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.635 [2024-11-06 12:46:19.083785] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.635 12:46:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.635 "name": "raid_bdev1", 00:15:30.635 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:30.635 "strip_size_kb": 0, 00:15:30.635 "state": "online", 00:15:30.635 "raid_level": "raid1", 00:15:30.635 "superblock": true, 00:15:30.635 "num_base_bdevs": 4, 00:15:30.635 "num_base_bdevs_discovered": 2, 00:15:30.635 "num_base_bdevs_operational": 2, 00:15:30.635 "base_bdevs_list": [ 00:15:30.635 { 00:15:30.635 "name": null, 00:15:30.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.635 "is_configured": false, 00:15:30.635 "data_offset": 0, 00:15:30.635 "data_size": 63488 00:15:30.635 }, 00:15:30.635 { 00:15:30.635 "name": null, 00:15:30.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.635 "is_configured": false, 00:15:30.635 "data_offset": 2048, 00:15:30.635 "data_size": 63488 00:15:30.635 }, 00:15:30.635 { 00:15:30.635 "name": "BaseBdev3", 00:15:30.635 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:30.635 "is_configured": true, 00:15:30.635 "data_offset": 2048, 00:15:30.635 "data_size": 63488 00:15:30.635 }, 00:15:30.635 { 00:15:30.635 "name": "BaseBdev4", 00:15:30.635 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:30.635 "is_configured": true, 00:15:30.635 "data_offset": 2048, 00:15:30.635 
"data_size": 63488 00:15:30.635 } 00:15:30.635 ] 00:15:30.635 }' 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.635 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.203 "name": "raid_bdev1", 00:15:31.203 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:31.203 "strip_size_kb": 0, 00:15:31.203 "state": "online", 00:15:31.203 "raid_level": "raid1", 00:15:31.203 "superblock": true, 00:15:31.203 "num_base_bdevs": 4, 00:15:31.203 "num_base_bdevs_discovered": 2, 00:15:31.203 "num_base_bdevs_operational": 2, 00:15:31.203 "base_bdevs_list": [ 00:15:31.203 { 00:15:31.203 "name": null, 00:15:31.203 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:31.203 "is_configured": false, 00:15:31.203 "data_offset": 0, 00:15:31.203 "data_size": 63488 00:15:31.203 }, 00:15:31.203 { 00:15:31.203 "name": null, 00:15:31.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.203 "is_configured": false, 00:15:31.203 "data_offset": 2048, 00:15:31.203 "data_size": 63488 00:15:31.203 }, 00:15:31.203 { 00:15:31.203 "name": "BaseBdev3", 00:15:31.203 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:31.203 "is_configured": true, 00:15:31.203 "data_offset": 2048, 00:15:31.203 "data_size": 63488 00:15:31.203 }, 00:15:31.203 { 00:15:31.203 "name": "BaseBdev4", 00:15:31.203 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:31.203 "is_configured": true, 00:15:31.203 "data_offset": 2048, 00:15:31.203 "data_size": 63488 00:15:31.203 } 00:15:31.203 ] 00:15:31.203 }' 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.203 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.203 12:46:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.203 [2024-11-06 12:46:19.787173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.203 [2024-11-06 12:46:19.787256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.203 [2024-11-06 12:46:19.787283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:31.203 [2024-11-06 12:46:19.787300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.203 [2024-11-06 12:46:19.787892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.203 [2024-11-06 12:46:19.787945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.203 [2024-11-06 12:46:19.788042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:31.203 [2024-11-06 12:46:19.788067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:31.203 [2024-11-06 12:46:19.788079] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:31.203 [2024-11-06 12:46:19.788097] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:31.203 BaseBdev1 00:15:31.204 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.204 12:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.578 "name": "raid_bdev1", 00:15:32.578 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:32.578 "strip_size_kb": 0, 00:15:32.578 "state": "online", 00:15:32.578 "raid_level": "raid1", 00:15:32.578 "superblock": true, 00:15:32.578 "num_base_bdevs": 4, 00:15:32.578 "num_base_bdevs_discovered": 2, 00:15:32.578 "num_base_bdevs_operational": 2, 00:15:32.578 "base_bdevs_list": [ 00:15:32.578 { 00:15:32.578 "name": null, 00:15:32.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.578 "is_configured": false, 00:15:32.578 
"data_offset": 0, 00:15:32.578 "data_size": 63488 00:15:32.578 }, 00:15:32.578 { 00:15:32.578 "name": null, 00:15:32.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.578 "is_configured": false, 00:15:32.578 "data_offset": 2048, 00:15:32.578 "data_size": 63488 00:15:32.578 }, 00:15:32.578 { 00:15:32.578 "name": "BaseBdev3", 00:15:32.578 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:32.578 "is_configured": true, 00:15:32.578 "data_offset": 2048, 00:15:32.578 "data_size": 63488 00:15:32.578 }, 00:15:32.578 { 00:15:32.578 "name": "BaseBdev4", 00:15:32.578 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:32.578 "is_configured": true, 00:15:32.578 "data_offset": 2048, 00:15:32.578 "data_size": 63488 00:15:32.578 } 00:15:32.578 ] 00:15:32.578 }' 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.578 12:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.837 "name": "raid_bdev1", 00:15:32.837 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:32.837 "strip_size_kb": 0, 00:15:32.837 "state": "online", 00:15:32.837 "raid_level": "raid1", 00:15:32.837 "superblock": true, 00:15:32.837 "num_base_bdevs": 4, 00:15:32.837 "num_base_bdevs_discovered": 2, 00:15:32.837 "num_base_bdevs_operational": 2, 00:15:32.837 "base_bdevs_list": [ 00:15:32.837 { 00:15:32.837 "name": null, 00:15:32.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.837 "is_configured": false, 00:15:32.837 "data_offset": 0, 00:15:32.837 "data_size": 63488 00:15:32.837 }, 00:15:32.837 { 00:15:32.837 "name": null, 00:15:32.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.837 "is_configured": false, 00:15:32.837 "data_offset": 2048, 00:15:32.837 "data_size": 63488 00:15:32.837 }, 00:15:32.837 { 00:15:32.837 "name": "BaseBdev3", 00:15:32.837 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:32.837 "is_configured": true, 00:15:32.837 "data_offset": 2048, 00:15:32.837 "data_size": 63488 00:15:32.837 }, 00:15:32.837 { 00:15:32.837 "name": "BaseBdev4", 00:15:32.837 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:32.837 "is_configured": true, 00:15:32.837 "data_offset": 2048, 00:15:32.837 "data_size": 63488 00:15:32.837 } 00:15:32.837 ] 00:15:32.837 }' 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:32.837 
12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.837 [2024-11-06 12:46:21.480004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.837 [2024-11-06 12:46:21.480210] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:32.837 [2024-11-06 12:46:21.480244] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:32.837 request: 00:15:32.837 { 00:15:32.837 "base_bdev": "BaseBdev1", 00:15:32.837 "raid_bdev": "raid_bdev1", 00:15:32.837 "method": "bdev_raid_add_base_bdev", 00:15:32.837 "req_id": 1 00:15:32.837 } 00:15:32.837 Got JSON-RPC error response 00:15:32.837 response: 00:15:32.837 { 00:15:32.837 "code": -22, 00:15:32.837 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:32.837 } 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.837 12:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.214 12:46:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.214 "name": "raid_bdev1", 00:15:34.214 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:34.214 "strip_size_kb": 0, 00:15:34.214 "state": "online", 00:15:34.214 "raid_level": "raid1", 00:15:34.214 "superblock": true, 00:15:34.214 "num_base_bdevs": 4, 00:15:34.214 "num_base_bdevs_discovered": 2, 00:15:34.214 "num_base_bdevs_operational": 2, 00:15:34.214 "base_bdevs_list": [ 00:15:34.214 { 00:15:34.214 "name": null, 00:15:34.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.214 "is_configured": false, 00:15:34.214 "data_offset": 0, 00:15:34.214 "data_size": 63488 00:15:34.214 }, 00:15:34.214 { 00:15:34.214 "name": null, 00:15:34.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.214 "is_configured": false, 00:15:34.214 "data_offset": 2048, 00:15:34.214 "data_size": 63488 00:15:34.214 }, 00:15:34.214 { 00:15:34.214 "name": "BaseBdev3", 00:15:34.214 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:34.214 "is_configured": true, 00:15:34.214 "data_offset": 2048, 00:15:34.214 "data_size": 63488 00:15:34.214 }, 00:15:34.214 { 00:15:34.214 "name": "BaseBdev4", 00:15:34.214 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:34.214 "is_configured": true, 00:15:34.214 "data_offset": 2048, 00:15:34.214 "data_size": 63488 00:15:34.214 } 00:15:34.214 ] 00:15:34.214 }' 00:15:34.214 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.214 12:46:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.473 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.474 12:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.474 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.474 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.474 "name": "raid_bdev1", 00:15:34.474 "uuid": "7c752790-34ac-4921-ad17-c77e774881fc", 00:15:34.474 "strip_size_kb": 0, 00:15:34.474 "state": "online", 00:15:34.474 "raid_level": "raid1", 00:15:34.474 "superblock": true, 00:15:34.474 "num_base_bdevs": 4, 00:15:34.474 "num_base_bdevs_discovered": 2, 00:15:34.474 "num_base_bdevs_operational": 2, 00:15:34.474 "base_bdevs_list": [ 00:15:34.474 { 00:15:34.474 "name": null, 00:15:34.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.474 "is_configured": false, 00:15:34.474 "data_offset": 0, 00:15:34.474 "data_size": 63488 00:15:34.474 }, 00:15:34.474 { 00:15:34.474 "name": null, 00:15:34.474 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:34.474 "is_configured": false, 00:15:34.474 "data_offset": 2048, 00:15:34.474 "data_size": 63488 00:15:34.474 }, 00:15:34.474 { 00:15:34.474 "name": "BaseBdev3", 00:15:34.474 "uuid": "869da8ee-d89c-5712-b7db-9d50e081d6ad", 00:15:34.474 "is_configured": true, 00:15:34.474 "data_offset": 2048, 00:15:34.474 "data_size": 63488 00:15:34.474 }, 00:15:34.474 { 00:15:34.474 "name": "BaseBdev4", 00:15:34.474 "uuid": "561f791b-6b50-57d2-b276-07ea3d63053c", 00:15:34.474 "is_configured": true, 00:15:34.474 "data_offset": 2048, 00:15:34.474 "data_size": 63488 00:15:34.474 } 00:15:34.474 ] 00:15:34.474 }' 00:15:34.474 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.474 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.474 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79576 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79576 ']' 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79576 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79576 00:15:34.733 killing process with pid 79576 00:15:34.733 Received shutdown signal, test time was about 19.329410 seconds 00:15:34.733 00:15:34.733 Latency(us) 00:15:34.733 [2024-11-06T12:46:23.390Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:34.733 [2024-11-06T12:46:23.390Z] =================================================================================================================== 00:15:34.733 [2024-11-06T12:46:23.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79576' 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79576 00:15:34.733 [2024-11-06 12:46:23.177279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.733 12:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79576 00:15:34.733 [2024-11-06 12:46:23.177463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.733 [2024-11-06 12:46:23.177575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.733 [2024-11-06 12:46:23.177593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:34.992 [2024-11-06 12:46:23.584005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.421 12:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:36.421 00:15:36.421 real 0m23.215s 00:15:36.421 user 0m31.463s 00:15:36.421 sys 0m2.538s 00:15:36.421 ************************************ 00:15:36.421 END TEST raid_rebuild_test_sb_io 00:15:36.421 ************************************ 00:15:36.421 12:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:36.421 12:46:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.421 12:46:24 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:36.421 12:46:24 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:36.421 12:46:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:36.421 12:46:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:36.421 12:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.421 ************************************ 00:15:36.421 START TEST raid5f_state_function_test 00:15:36.421 ************************************ 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.421 12:46:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.421 Process raid pid: 80315 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80315 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80315' 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80315 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:36.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80315 ']' 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.421 12:46:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.421 [2024-11-06 12:46:24.921978] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:15:36.421 [2024-11-06 12:46:24.922396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.680 [2024-11-06 12:46:25.101014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.680 [2024-11-06 12:46:25.251472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.938 [2024-11-06 12:46:25.462977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.938 [2024-11-06 12:46:25.463034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.505 [2024-11-06 12:46:25.901434] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.505 [2024-11-06 12:46:25.901523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.505 [2024-11-06 12:46:25.901544] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.505 [2024-11-06 12:46:25.901566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.505 [2024-11-06 12:46:25.901578] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:37.505 [2024-11-06 12:46:25.901614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.505 "name": "Existed_Raid", 00:15:37.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.505 "strip_size_kb": 64, 00:15:37.505 "state": "configuring", 00:15:37.505 "raid_level": "raid5f", 00:15:37.505 "superblock": false, 00:15:37.505 "num_base_bdevs": 3, 00:15:37.505 "num_base_bdevs_discovered": 0, 00:15:37.505 "num_base_bdevs_operational": 3, 00:15:37.505 "base_bdevs_list": [ 00:15:37.505 { 00:15:37.505 "name": "BaseBdev1", 00:15:37.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.505 "is_configured": false, 00:15:37.505 "data_offset": 0, 00:15:37.505 "data_size": 0 00:15:37.505 }, 00:15:37.505 { 00:15:37.505 "name": "BaseBdev2", 00:15:37.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.505 "is_configured": false, 00:15:37.505 "data_offset": 0, 00:15:37.505 "data_size": 0 00:15:37.505 }, 00:15:37.505 { 00:15:37.505 "name": "BaseBdev3", 00:15:37.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.505 "is_configured": false, 00:15:37.505 "data_offset": 0, 00:15:37.505 "data_size": 0 00:15:37.505 } 00:15:37.505 ] 00:15:37.505 }' 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.505 12:46:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.763 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.763 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.763 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.022 [2024-11-06 12:46:26.425546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.022 [2024-11-06 12:46:26.425840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.022 [2024-11-06 12:46:26.433481] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.022 [2024-11-06 12:46:26.433545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.022 [2024-11-06 12:46:26.433565] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.022 [2024-11-06 12:46:26.433601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.022 [2024-11-06 12:46:26.433614] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.022 [2024-11-06 12:46:26.433633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.022 [2024-11-06 12:46:26.478793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.022 BaseBdev1 00:15:38.022 12:46:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.022 [ 00:15:38.022 { 00:15:38.022 "name": "BaseBdev1", 00:15:38.022 "aliases": [ 00:15:38.022 "1048ace4-c3bc-4063-8b76-1c951b77fded" 00:15:38.022 ], 00:15:38.022 "product_name": "Malloc disk", 00:15:38.022 "block_size": 512, 00:15:38.022 "num_blocks": 65536, 00:15:38.022 "uuid": "1048ace4-c3bc-4063-8b76-1c951b77fded", 00:15:38.022 "assigned_rate_limits": { 00:15:38.022 "rw_ios_per_sec": 0, 00:15:38.022 
"rw_mbytes_per_sec": 0, 00:15:38.022 "r_mbytes_per_sec": 0, 00:15:38.022 "w_mbytes_per_sec": 0 00:15:38.022 }, 00:15:38.022 "claimed": true, 00:15:38.022 "claim_type": "exclusive_write", 00:15:38.022 "zoned": false, 00:15:38.022 "supported_io_types": { 00:15:38.022 "read": true, 00:15:38.022 "write": true, 00:15:38.022 "unmap": true, 00:15:38.022 "flush": true, 00:15:38.022 "reset": true, 00:15:38.022 "nvme_admin": false, 00:15:38.022 "nvme_io": false, 00:15:38.022 "nvme_io_md": false, 00:15:38.022 "write_zeroes": true, 00:15:38.022 "zcopy": true, 00:15:38.022 "get_zone_info": false, 00:15:38.022 "zone_management": false, 00:15:38.022 "zone_append": false, 00:15:38.022 "compare": false, 00:15:38.022 "compare_and_write": false, 00:15:38.022 "abort": true, 00:15:38.022 "seek_hole": false, 00:15:38.022 "seek_data": false, 00:15:38.022 "copy": true, 00:15:38.022 "nvme_iov_md": false 00:15:38.022 }, 00:15:38.022 "memory_domains": [ 00:15:38.022 { 00:15:38.022 "dma_device_id": "system", 00:15:38.022 "dma_device_type": 1 00:15:38.022 }, 00:15:38.022 { 00:15:38.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.022 "dma_device_type": 2 00:15:38.022 } 00:15:38.022 ], 00:15:38.022 "driver_specific": {} 00:15:38.022 } 00:15:38.022 ] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.022 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.023 12:46:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.023 "name": "Existed_Raid", 00:15:38.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.023 "strip_size_kb": 64, 00:15:38.023 "state": "configuring", 00:15:38.023 "raid_level": "raid5f", 00:15:38.023 "superblock": false, 00:15:38.023 "num_base_bdevs": 3, 00:15:38.023 "num_base_bdevs_discovered": 1, 00:15:38.023 "num_base_bdevs_operational": 3, 00:15:38.023 "base_bdevs_list": [ 00:15:38.023 { 00:15:38.023 "name": "BaseBdev1", 00:15:38.023 "uuid": "1048ace4-c3bc-4063-8b76-1c951b77fded", 00:15:38.023 "is_configured": true, 00:15:38.023 "data_offset": 0, 00:15:38.023 "data_size": 65536 00:15:38.023 }, 00:15:38.023 { 00:15:38.023 "name": 
"BaseBdev2", 00:15:38.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.023 "is_configured": false, 00:15:38.023 "data_offset": 0, 00:15:38.023 "data_size": 0 00:15:38.023 }, 00:15:38.023 { 00:15:38.023 "name": "BaseBdev3", 00:15:38.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.023 "is_configured": false, 00:15:38.023 "data_offset": 0, 00:15:38.023 "data_size": 0 00:15:38.023 } 00:15:38.023 ] 00:15:38.023 }' 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.023 12:46:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.589 [2024-11-06 12:46:27.043015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.589 [2024-11-06 12:46:27.043089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.589 [2024-11-06 12:46:27.051068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.589 [2024-11-06 12:46:27.053543] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:38.589 [2024-11-06 12:46:27.053604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.589 [2024-11-06 12:46:27.053624] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.589 [2024-11-06 12:46:27.053645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.589 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.590 "name": "Existed_Raid", 00:15:38.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.590 "strip_size_kb": 64, 00:15:38.590 "state": "configuring", 00:15:38.590 "raid_level": "raid5f", 00:15:38.590 "superblock": false, 00:15:38.590 "num_base_bdevs": 3, 00:15:38.590 "num_base_bdevs_discovered": 1, 00:15:38.590 "num_base_bdevs_operational": 3, 00:15:38.590 "base_bdevs_list": [ 00:15:38.590 { 00:15:38.590 "name": "BaseBdev1", 00:15:38.590 "uuid": "1048ace4-c3bc-4063-8b76-1c951b77fded", 00:15:38.590 "is_configured": true, 00:15:38.590 "data_offset": 0, 00:15:38.590 "data_size": 65536 00:15:38.590 }, 00:15:38.590 { 00:15:38.590 "name": "BaseBdev2", 00:15:38.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.590 "is_configured": false, 00:15:38.590 "data_offset": 0, 00:15:38.590 "data_size": 0 00:15:38.590 }, 00:15:38.590 { 00:15:38.590 "name": "BaseBdev3", 00:15:38.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.590 "is_configured": false, 00:15:38.590 "data_offset": 0, 00:15:38.590 "data_size": 0 00:15:38.590 } 00:15:38.590 ] 00:15:38.590 }' 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.590 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 [2024-11-06 12:46:27.545292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.155 BaseBdev2 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.155 12:46:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.155 [ 00:15:39.155 { 00:15:39.155 "name": "BaseBdev2", 00:15:39.155 "aliases": [ 00:15:39.155 "efa1c2bf-0a46-4c26-8f26-cbfc6cda97e5" 00:15:39.155 ], 00:15:39.155 "product_name": "Malloc disk", 00:15:39.156 "block_size": 512, 00:15:39.156 "num_blocks": 65536, 00:15:39.156 "uuid": "efa1c2bf-0a46-4c26-8f26-cbfc6cda97e5", 00:15:39.156 "assigned_rate_limits": { 00:15:39.156 "rw_ios_per_sec": 0, 00:15:39.156 "rw_mbytes_per_sec": 0, 00:15:39.156 "r_mbytes_per_sec": 0, 00:15:39.156 "w_mbytes_per_sec": 0 00:15:39.156 }, 00:15:39.156 "claimed": true, 00:15:39.156 "claim_type": "exclusive_write", 00:15:39.156 "zoned": false, 00:15:39.156 "supported_io_types": { 00:15:39.156 "read": true, 00:15:39.156 "write": true, 00:15:39.156 "unmap": true, 00:15:39.156 "flush": true, 00:15:39.156 "reset": true, 00:15:39.156 "nvme_admin": false, 00:15:39.156 "nvme_io": false, 00:15:39.156 "nvme_io_md": false, 00:15:39.156 "write_zeroes": true, 00:15:39.156 "zcopy": true, 00:15:39.156 "get_zone_info": false, 00:15:39.156 "zone_management": false, 00:15:39.156 "zone_append": false, 00:15:39.156 "compare": false, 00:15:39.156 "compare_and_write": false, 00:15:39.156 "abort": true, 00:15:39.156 "seek_hole": false, 00:15:39.156 "seek_data": false, 00:15:39.156 "copy": true, 00:15:39.156 "nvme_iov_md": false 00:15:39.156 }, 00:15:39.156 "memory_domains": [ 00:15:39.156 { 00:15:39.156 "dma_device_id": "system", 00:15:39.156 "dma_device_type": 1 00:15:39.156 }, 00:15:39.156 { 00:15:39.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.156 "dma_device_type": 2 00:15:39.156 } 00:15:39.156 ], 00:15:39.156 "driver_specific": {} 00:15:39.156 } 00:15:39.156 ] 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:39.156 "name": "Existed_Raid", 00:15:39.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.156 "strip_size_kb": 64, 00:15:39.156 "state": "configuring", 00:15:39.156 "raid_level": "raid5f", 00:15:39.156 "superblock": false, 00:15:39.156 "num_base_bdevs": 3, 00:15:39.156 "num_base_bdevs_discovered": 2, 00:15:39.156 "num_base_bdevs_operational": 3, 00:15:39.156 "base_bdevs_list": [ 00:15:39.156 { 00:15:39.156 "name": "BaseBdev1", 00:15:39.156 "uuid": "1048ace4-c3bc-4063-8b76-1c951b77fded", 00:15:39.156 "is_configured": true, 00:15:39.156 "data_offset": 0, 00:15:39.156 "data_size": 65536 00:15:39.156 }, 00:15:39.156 { 00:15:39.156 "name": "BaseBdev2", 00:15:39.156 "uuid": "efa1c2bf-0a46-4c26-8f26-cbfc6cda97e5", 00:15:39.156 "is_configured": true, 00:15:39.156 "data_offset": 0, 00:15:39.156 "data_size": 65536 00:15:39.156 }, 00:15:39.156 { 00:15:39.156 "name": "BaseBdev3", 00:15:39.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.156 "is_configured": false, 00:15:39.156 "data_offset": 0, 00:15:39.156 "data_size": 0 00:15:39.156 } 00:15:39.156 ] 00:15:39.156 }' 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.156 12:46:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.487 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:39.487 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.487 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.487 [2024-11-06 12:46:28.121778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.487 [2024-11-06 12:46:28.121871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:39.487 [2024-11-06 12:46:28.121895] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:39.487 [2024-11-06 12:46:28.122263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:39.745 [2024-11-06 12:46:28.127729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:39.745 [2024-11-06 12:46:28.127763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:39.745 [2024-11-06 12:46:28.128215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.745 BaseBdev3 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.745 [ 00:15:39.745 { 00:15:39.745 "name": "BaseBdev3", 00:15:39.745 "aliases": [ 00:15:39.745 "fb903101-c391-4b0b-bdfb-3e5bbe160fff" 00:15:39.745 ], 00:15:39.745 "product_name": "Malloc disk", 00:15:39.745 "block_size": 512, 00:15:39.745 "num_blocks": 65536, 00:15:39.745 "uuid": "fb903101-c391-4b0b-bdfb-3e5bbe160fff", 00:15:39.745 "assigned_rate_limits": { 00:15:39.745 "rw_ios_per_sec": 0, 00:15:39.745 "rw_mbytes_per_sec": 0, 00:15:39.745 "r_mbytes_per_sec": 0, 00:15:39.745 "w_mbytes_per_sec": 0 00:15:39.745 }, 00:15:39.745 "claimed": true, 00:15:39.745 "claim_type": "exclusive_write", 00:15:39.745 "zoned": false, 00:15:39.745 "supported_io_types": { 00:15:39.745 "read": true, 00:15:39.745 "write": true, 00:15:39.745 "unmap": true, 00:15:39.745 "flush": true, 00:15:39.745 "reset": true, 00:15:39.745 "nvme_admin": false, 00:15:39.745 "nvme_io": false, 00:15:39.745 "nvme_io_md": false, 00:15:39.745 "write_zeroes": true, 00:15:39.745 "zcopy": true, 00:15:39.745 "get_zone_info": false, 00:15:39.745 "zone_management": false, 00:15:39.745 "zone_append": false, 00:15:39.745 "compare": false, 00:15:39.745 "compare_and_write": false, 00:15:39.745 "abort": true, 00:15:39.745 "seek_hole": false, 00:15:39.745 "seek_data": false, 00:15:39.745 "copy": true, 00:15:39.745 "nvme_iov_md": false 00:15:39.745 }, 00:15:39.745 "memory_domains": [ 00:15:39.745 { 00:15:39.745 "dma_device_id": "system", 00:15:39.745 "dma_device_type": 1 00:15:39.745 }, 00:15:39.745 { 00:15:39.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.745 "dma_device_type": 2 00:15:39.745 } 00:15:39.745 ], 00:15:39.745 "driver_specific": {} 00:15:39.745 } 00:15:39.745 ] 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.745 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.746 12:46:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.746 "name": "Existed_Raid", 00:15:39.746 "uuid": "6892e21e-cef0-4662-a46b-f10fac393975", 00:15:39.746 "strip_size_kb": 64, 00:15:39.746 "state": "online", 00:15:39.746 "raid_level": "raid5f", 00:15:39.746 "superblock": false, 00:15:39.746 "num_base_bdevs": 3, 00:15:39.746 "num_base_bdevs_discovered": 3, 00:15:39.746 "num_base_bdevs_operational": 3, 00:15:39.746 "base_bdevs_list": [ 00:15:39.746 { 00:15:39.746 "name": "BaseBdev1", 00:15:39.746 "uuid": "1048ace4-c3bc-4063-8b76-1c951b77fded", 00:15:39.746 "is_configured": true, 00:15:39.746 "data_offset": 0, 00:15:39.746 "data_size": 65536 00:15:39.746 }, 00:15:39.746 { 00:15:39.746 "name": "BaseBdev2", 00:15:39.746 "uuid": "efa1c2bf-0a46-4c26-8f26-cbfc6cda97e5", 00:15:39.746 "is_configured": true, 00:15:39.746 "data_offset": 0, 00:15:39.746 "data_size": 65536 00:15:39.746 }, 00:15:39.746 { 00:15:39.746 "name": "BaseBdev3", 00:15:39.746 "uuid": "fb903101-c391-4b0b-bdfb-3e5bbe160fff", 00:15:39.746 "is_configured": true, 00:15:39.746 "data_offset": 0, 00:15:39.746 "data_size": 65536 00:15:39.746 } 00:15:39.746 ] 00:15:39.746 }' 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.746 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:40.312 12:46:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.312 [2024-11-06 12:46:28.698395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.312 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:40.312 "name": "Existed_Raid", 00:15:40.312 "aliases": [ 00:15:40.313 "6892e21e-cef0-4662-a46b-f10fac393975" 00:15:40.313 ], 00:15:40.313 "product_name": "Raid Volume", 00:15:40.313 "block_size": 512, 00:15:40.313 "num_blocks": 131072, 00:15:40.313 "uuid": "6892e21e-cef0-4662-a46b-f10fac393975", 00:15:40.313 "assigned_rate_limits": { 00:15:40.313 "rw_ios_per_sec": 0, 00:15:40.313 "rw_mbytes_per_sec": 0, 00:15:40.313 "r_mbytes_per_sec": 0, 00:15:40.313 "w_mbytes_per_sec": 0 00:15:40.313 }, 00:15:40.313 "claimed": false, 00:15:40.313 "zoned": false, 00:15:40.313 "supported_io_types": { 00:15:40.313 "read": true, 00:15:40.313 "write": true, 00:15:40.313 "unmap": false, 00:15:40.313 "flush": false, 00:15:40.313 "reset": true, 00:15:40.313 "nvme_admin": false, 00:15:40.313 "nvme_io": false, 00:15:40.313 "nvme_io_md": false, 00:15:40.313 "write_zeroes": true, 00:15:40.313 "zcopy": false, 00:15:40.313 "get_zone_info": false, 00:15:40.313 "zone_management": false, 00:15:40.313 "zone_append": false, 
00:15:40.313 "compare": false, 00:15:40.313 "compare_and_write": false, 00:15:40.313 "abort": false, 00:15:40.313 "seek_hole": false, 00:15:40.313 "seek_data": false, 00:15:40.313 "copy": false, 00:15:40.313 "nvme_iov_md": false 00:15:40.313 }, 00:15:40.313 "driver_specific": { 00:15:40.313 "raid": { 00:15:40.313 "uuid": "6892e21e-cef0-4662-a46b-f10fac393975", 00:15:40.313 "strip_size_kb": 64, 00:15:40.313 "state": "online", 00:15:40.313 "raid_level": "raid5f", 00:15:40.313 "superblock": false, 00:15:40.313 "num_base_bdevs": 3, 00:15:40.313 "num_base_bdevs_discovered": 3, 00:15:40.313 "num_base_bdevs_operational": 3, 00:15:40.313 "base_bdevs_list": [ 00:15:40.313 { 00:15:40.313 "name": "BaseBdev1", 00:15:40.313 "uuid": "1048ace4-c3bc-4063-8b76-1c951b77fded", 00:15:40.313 "is_configured": true, 00:15:40.313 "data_offset": 0, 00:15:40.313 "data_size": 65536 00:15:40.313 }, 00:15:40.313 { 00:15:40.313 "name": "BaseBdev2", 00:15:40.313 "uuid": "efa1c2bf-0a46-4c26-8f26-cbfc6cda97e5", 00:15:40.313 "is_configured": true, 00:15:40.313 "data_offset": 0, 00:15:40.313 "data_size": 65536 00:15:40.313 }, 00:15:40.313 { 00:15:40.313 "name": "BaseBdev3", 00:15:40.313 "uuid": "fb903101-c391-4b0b-bdfb-3e5bbe160fff", 00:15:40.313 "is_configured": true, 00:15:40.313 "data_offset": 0, 00:15:40.313 "data_size": 65536 00:15:40.313 } 00:15:40.313 ] 00:15:40.313 } 00:15:40.313 } 00:15:40.313 }' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:40.313 BaseBdev2 00:15:40.313 BaseBdev3' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.313 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.571 12:46:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.571 [2024-11-06 12:46:29.010177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:40.571 
12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.571 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.572 "name": "Existed_Raid", 00:15:40.572 "uuid": "6892e21e-cef0-4662-a46b-f10fac393975", 00:15:40.572 "strip_size_kb": 64, 00:15:40.572 "state": 
"online", 00:15:40.572 "raid_level": "raid5f", 00:15:40.572 "superblock": false, 00:15:40.572 "num_base_bdevs": 3, 00:15:40.572 "num_base_bdevs_discovered": 2, 00:15:40.572 "num_base_bdevs_operational": 2, 00:15:40.572 "base_bdevs_list": [ 00:15:40.572 { 00:15:40.572 "name": null, 00:15:40.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.572 "is_configured": false, 00:15:40.572 "data_offset": 0, 00:15:40.572 "data_size": 65536 00:15:40.572 }, 00:15:40.572 { 00:15:40.572 "name": "BaseBdev2", 00:15:40.572 "uuid": "efa1c2bf-0a46-4c26-8f26-cbfc6cda97e5", 00:15:40.572 "is_configured": true, 00:15:40.572 "data_offset": 0, 00:15:40.572 "data_size": 65536 00:15:40.572 }, 00:15:40.572 { 00:15:40.572 "name": "BaseBdev3", 00:15:40.572 "uuid": "fb903101-c391-4b0b-bdfb-3e5bbe160fff", 00:15:40.572 "is_configured": true, 00:15:40.572 "data_offset": 0, 00:15:40.572 "data_size": 65536 00:15:40.572 } 00:15:40.572 ] 00:15:40.572 }' 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.572 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.138 [2024-11-06 12:46:29.667347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.138 [2024-11-06 12:46:29.667512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.138 [2024-11-06 12:46:29.750917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.138 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.396 [2024-11-06 12:46:29.810940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.396 [2024-11-06 12:46:29.811006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.396 BaseBdev2 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.396 12:46:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.396 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.396 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.396 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.396 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:41.396 [ 00:15:41.396 { 00:15:41.396 "name": "BaseBdev2", 00:15:41.396 "aliases": [ 00:15:41.396 "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a" 00:15:41.396 ], 00:15:41.396 "product_name": "Malloc disk", 00:15:41.396 "block_size": 512, 00:15:41.396 "num_blocks": 65536, 00:15:41.396 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:41.396 "assigned_rate_limits": { 00:15:41.396 "rw_ios_per_sec": 0, 00:15:41.396 "rw_mbytes_per_sec": 0, 00:15:41.396 "r_mbytes_per_sec": 0, 00:15:41.396 "w_mbytes_per_sec": 0 00:15:41.396 }, 00:15:41.396 "claimed": false, 00:15:41.396 "zoned": false, 00:15:41.396 "supported_io_types": { 00:15:41.396 "read": true, 00:15:41.396 "write": true, 00:15:41.396 "unmap": true, 00:15:41.396 "flush": true, 00:15:41.396 "reset": true, 00:15:41.396 "nvme_admin": false, 00:15:41.396 "nvme_io": false, 00:15:41.396 "nvme_io_md": false, 00:15:41.396 "write_zeroes": true, 00:15:41.396 "zcopy": true, 00:15:41.396 "get_zone_info": false, 00:15:41.396 "zone_management": false, 00:15:41.396 "zone_append": false, 00:15:41.396 "compare": false, 00:15:41.396 "compare_and_write": false, 00:15:41.396 "abort": true, 00:15:41.396 "seek_hole": false, 00:15:41.396 "seek_data": false, 00:15:41.396 "copy": true, 00:15:41.396 "nvme_iov_md": false 00:15:41.396 }, 00:15:41.396 "memory_domains": [ 00:15:41.396 { 00:15:41.396 "dma_device_id": "system", 00:15:41.396 "dma_device_type": 1 00:15:41.396 }, 00:15:41.396 { 00:15:41.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.396 "dma_device_type": 2 00:15:41.396 } 00:15:41.396 ], 00:15:41.397 "driver_specific": {} 00:15:41.397 } 00:15:41.397 ] 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.397 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.655 BaseBdev3 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.655 [ 00:15:41.655 { 00:15:41.655 "name": "BaseBdev3", 00:15:41.655 "aliases": [ 00:15:41.655 "2804b6e5-85f0-4730-93cc-42504e0ad9e1" 00:15:41.655 ], 00:15:41.655 "product_name": "Malloc disk", 00:15:41.655 "block_size": 512, 00:15:41.655 "num_blocks": 65536, 00:15:41.655 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:41.655 "assigned_rate_limits": { 00:15:41.655 "rw_ios_per_sec": 0, 00:15:41.655 "rw_mbytes_per_sec": 0, 00:15:41.655 "r_mbytes_per_sec": 0, 00:15:41.655 "w_mbytes_per_sec": 0 00:15:41.655 }, 00:15:41.655 "claimed": false, 00:15:41.655 "zoned": false, 00:15:41.655 "supported_io_types": { 00:15:41.655 "read": true, 00:15:41.655 "write": true, 00:15:41.655 "unmap": true, 00:15:41.655 "flush": true, 00:15:41.655 "reset": true, 00:15:41.655 "nvme_admin": false, 00:15:41.655 "nvme_io": false, 00:15:41.655 "nvme_io_md": false, 00:15:41.655 "write_zeroes": true, 00:15:41.655 "zcopy": true, 00:15:41.655 "get_zone_info": false, 00:15:41.655 "zone_management": false, 00:15:41.655 "zone_append": false, 00:15:41.655 "compare": false, 00:15:41.655 "compare_and_write": false, 00:15:41.655 "abort": true, 00:15:41.655 "seek_hole": false, 00:15:41.655 "seek_data": false, 00:15:41.655 "copy": true, 00:15:41.655 "nvme_iov_md": false 00:15:41.655 }, 00:15:41.655 "memory_domains": [ 00:15:41.655 { 00:15:41.655 "dma_device_id": "system", 00:15:41.655 "dma_device_type": 1 00:15:41.655 }, 00:15:41.655 { 00:15:41.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.655 "dma_device_type": 2 00:15:41.655 } 00:15:41.655 ], 00:15:41.655 "driver_specific": {} 00:15:41.655 } 00:15:41.655 ] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.655 12:46:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.655 [2024-11-06 12:46:30.102163] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.655 [2024-11-06 12:46:30.102244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.655 [2024-11-06 12:46:30.102280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.655 [2024-11-06 12:46:30.104742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.655 12:46:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.655 "name": "Existed_Raid", 00:15:41.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.655 "strip_size_kb": 64, 00:15:41.655 "state": "configuring", 00:15:41.655 "raid_level": "raid5f", 00:15:41.655 "superblock": false, 00:15:41.655 "num_base_bdevs": 3, 00:15:41.655 "num_base_bdevs_discovered": 2, 00:15:41.655 "num_base_bdevs_operational": 3, 00:15:41.655 "base_bdevs_list": [ 00:15:41.655 { 00:15:41.655 "name": "BaseBdev1", 00:15:41.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.655 "is_configured": false, 00:15:41.655 "data_offset": 0, 00:15:41.655 "data_size": 0 00:15:41.655 }, 00:15:41.655 { 00:15:41.655 "name": "BaseBdev2", 00:15:41.655 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:41.655 "is_configured": true, 00:15:41.655 "data_offset": 0, 00:15:41.655 "data_size": 65536 00:15:41.655 }, 00:15:41.655 { 00:15:41.655 "name": "BaseBdev3", 00:15:41.655 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:41.655 "is_configured": true, 
00:15:41.655 "data_offset": 0, 00:15:41.655 "data_size": 65536 00:15:41.655 } 00:15:41.655 ] 00:15:41.655 }' 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.655 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.222 [2024-11-06 12:46:30.594336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.222 12:46:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.222 "name": "Existed_Raid", 00:15:42.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.222 "strip_size_kb": 64, 00:15:42.222 "state": "configuring", 00:15:42.222 "raid_level": "raid5f", 00:15:42.222 "superblock": false, 00:15:42.222 "num_base_bdevs": 3, 00:15:42.222 "num_base_bdevs_discovered": 1, 00:15:42.222 "num_base_bdevs_operational": 3, 00:15:42.222 "base_bdevs_list": [ 00:15:42.222 { 00:15:42.222 "name": "BaseBdev1", 00:15:42.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.222 "is_configured": false, 00:15:42.222 "data_offset": 0, 00:15:42.222 "data_size": 0 00:15:42.222 }, 00:15:42.222 { 00:15:42.222 "name": null, 00:15:42.222 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:42.222 "is_configured": false, 00:15:42.222 "data_offset": 0, 00:15:42.222 "data_size": 65536 00:15:42.222 }, 00:15:42.222 { 00:15:42.222 "name": "BaseBdev3", 00:15:42.222 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:42.222 "is_configured": true, 00:15:42.222 "data_offset": 0, 00:15:42.222 "data_size": 65536 00:15:42.222 } 00:15:42.222 ] 00:15:42.222 }' 00:15:42.222 12:46:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.222 12:46:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.481 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.481 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.481 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.481 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.481 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.739 [2024-11-06 12:46:31.197115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.739 BaseBdev1 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:42.739 12:46:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.739 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.740 [ 00:15:42.740 { 00:15:42.740 "name": "BaseBdev1", 00:15:42.740 "aliases": [ 00:15:42.740 "1eba59b0-e5f4-45cb-8fc1-27cf959c27be" 00:15:42.740 ], 00:15:42.740 "product_name": "Malloc disk", 00:15:42.740 "block_size": 512, 00:15:42.740 "num_blocks": 65536, 00:15:42.740 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:42.740 "assigned_rate_limits": { 00:15:42.740 "rw_ios_per_sec": 0, 00:15:42.740 "rw_mbytes_per_sec": 0, 00:15:42.740 "r_mbytes_per_sec": 0, 00:15:42.740 "w_mbytes_per_sec": 0 00:15:42.740 }, 00:15:42.740 "claimed": true, 00:15:42.740 "claim_type": "exclusive_write", 00:15:42.740 "zoned": false, 00:15:42.740 "supported_io_types": { 00:15:42.740 "read": true, 00:15:42.740 "write": true, 00:15:42.740 "unmap": true, 00:15:42.740 "flush": true, 00:15:42.740 "reset": true, 00:15:42.740 "nvme_admin": false, 00:15:42.740 "nvme_io": false, 00:15:42.740 "nvme_io_md": false, 00:15:42.740 "write_zeroes": true, 00:15:42.740 "zcopy": true, 00:15:42.740 "get_zone_info": false, 00:15:42.740 "zone_management": false, 00:15:42.740 "zone_append": false, 00:15:42.740 
"compare": false, 00:15:42.740 "compare_and_write": false, 00:15:42.740 "abort": true, 00:15:42.740 "seek_hole": false, 00:15:42.740 "seek_data": false, 00:15:42.740 "copy": true, 00:15:42.740 "nvme_iov_md": false 00:15:42.740 }, 00:15:42.740 "memory_domains": [ 00:15:42.740 { 00:15:42.740 "dma_device_id": "system", 00:15:42.740 "dma_device_type": 1 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.740 "dma_device_type": 2 00:15:42.740 } 00:15:42.740 ], 00:15:42.740 "driver_specific": {} 00:15:42.740 } 00:15:42.740 ] 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.740 12:46:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.740 "name": "Existed_Raid", 00:15:42.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.740 "strip_size_kb": 64, 00:15:42.740 "state": "configuring", 00:15:42.740 "raid_level": "raid5f", 00:15:42.740 "superblock": false, 00:15:42.740 "num_base_bdevs": 3, 00:15:42.740 "num_base_bdevs_discovered": 2, 00:15:42.740 "num_base_bdevs_operational": 3, 00:15:42.740 "base_bdevs_list": [ 00:15:42.740 { 00:15:42.740 "name": "BaseBdev1", 00:15:42.740 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 0, 00:15:42.740 "data_size": 65536 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "name": null, 00:15:42.740 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:42.740 "is_configured": false, 00:15:42.740 "data_offset": 0, 00:15:42.740 "data_size": 65536 00:15:42.740 }, 00:15:42.740 { 00:15:42.740 "name": "BaseBdev3", 00:15:42.740 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:42.740 "is_configured": true, 00:15:42.740 "data_offset": 0, 00:15:42.740 "data_size": 65536 00:15:42.740 } 00:15:42.740 ] 00:15:42.740 }' 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.740 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.307 12:46:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:43.307 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.307 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.308 [2024-11-06 12:46:31.785333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.308 12:46:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.308 "name": "Existed_Raid", 00:15:43.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.308 "strip_size_kb": 64, 00:15:43.308 "state": "configuring", 00:15:43.308 "raid_level": "raid5f", 00:15:43.308 "superblock": false, 00:15:43.308 "num_base_bdevs": 3, 00:15:43.308 "num_base_bdevs_discovered": 1, 00:15:43.308 "num_base_bdevs_operational": 3, 00:15:43.308 "base_bdevs_list": [ 00:15:43.308 { 00:15:43.308 "name": "BaseBdev1", 00:15:43.308 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:43.308 "is_configured": true, 00:15:43.308 "data_offset": 0, 00:15:43.308 "data_size": 65536 00:15:43.308 }, 00:15:43.308 { 00:15:43.308 "name": null, 00:15:43.308 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:43.308 "is_configured": false, 00:15:43.308 "data_offset": 0, 00:15:43.308 "data_size": 65536 00:15:43.308 }, 00:15:43.308 { 00:15:43.308 "name": null, 
00:15:43.308 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:43.308 "is_configured": false, 00:15:43.308 "data_offset": 0, 00:15:43.308 "data_size": 65536 00:15:43.308 } 00:15:43.308 ] 00:15:43.308 }' 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.308 12:46:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.876 [2024-11-06 12:46:32.325513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.876 12:46:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.876 "name": "Existed_Raid", 00:15:43.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.876 "strip_size_kb": 64, 00:15:43.876 "state": "configuring", 00:15:43.876 "raid_level": "raid5f", 00:15:43.876 "superblock": false, 00:15:43.876 "num_base_bdevs": 3, 00:15:43.876 "num_base_bdevs_discovered": 2, 00:15:43.876 "num_base_bdevs_operational": 3, 00:15:43.876 "base_bdevs_list": [ 00:15:43.876 { 
00:15:43.876 "name": "BaseBdev1", 00:15:43.876 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:43.876 "is_configured": true, 00:15:43.876 "data_offset": 0, 00:15:43.876 "data_size": 65536 00:15:43.876 }, 00:15:43.876 { 00:15:43.876 "name": null, 00:15:43.876 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:43.876 "is_configured": false, 00:15:43.876 "data_offset": 0, 00:15:43.876 "data_size": 65536 00:15:43.876 }, 00:15:43.876 { 00:15:43.876 "name": "BaseBdev3", 00:15:43.876 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:43.876 "is_configured": true, 00:15:43.876 "data_offset": 0, 00:15:43.876 "data_size": 65536 00:15:43.876 } 00:15:43.876 ] 00:15:43.876 }' 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.876 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.479 [2024-11-06 12:46:32.873754] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.479 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.480 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.480 12:46:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.480 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.480 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.480 12:46:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.480 12:46:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.480 "name": "Existed_Raid", 00:15:44.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.480 "strip_size_kb": 64, 00:15:44.480 "state": "configuring", 00:15:44.480 "raid_level": "raid5f", 00:15:44.480 "superblock": false, 00:15:44.480 "num_base_bdevs": 3, 00:15:44.480 "num_base_bdevs_discovered": 1, 00:15:44.480 "num_base_bdevs_operational": 3, 00:15:44.480 "base_bdevs_list": [ 00:15:44.480 { 00:15:44.480 "name": null, 00:15:44.480 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:44.480 "is_configured": false, 00:15:44.480 "data_offset": 0, 00:15:44.480 "data_size": 65536 00:15:44.480 }, 00:15:44.480 { 00:15:44.480 "name": null, 00:15:44.480 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:44.480 "is_configured": false, 00:15:44.480 "data_offset": 0, 00:15:44.480 "data_size": 65536 00:15:44.480 }, 00:15:44.480 { 00:15:44.480 "name": "BaseBdev3", 00:15:44.480 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:44.480 "is_configured": true, 00:15:44.480 "data_offset": 0, 00:15:44.480 "data_size": 65536 00:15:44.480 } 00:15:44.480 ] 00:15:44.480 }' 00:15:44.480 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.480 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.059 [2024-11-06 12:46:33.508998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.059 12:46:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.059 "name": "Existed_Raid", 00:15:45.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.059 "strip_size_kb": 64, 00:15:45.059 "state": "configuring", 00:15:45.059 "raid_level": "raid5f", 00:15:45.059 "superblock": false, 00:15:45.059 "num_base_bdevs": 3, 00:15:45.059 "num_base_bdevs_discovered": 2, 00:15:45.059 "num_base_bdevs_operational": 3, 00:15:45.059 "base_bdevs_list": [ 00:15:45.059 { 00:15:45.059 "name": null, 00:15:45.059 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:45.059 "is_configured": false, 00:15:45.059 "data_offset": 0, 00:15:45.059 "data_size": 65536 00:15:45.059 }, 00:15:45.059 { 00:15:45.059 "name": "BaseBdev2", 00:15:45.059 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:45.059 "is_configured": true, 00:15:45.059 "data_offset": 0, 00:15:45.059 "data_size": 65536 00:15:45.059 }, 00:15:45.059 { 00:15:45.059 "name": "BaseBdev3", 00:15:45.059 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:45.059 "is_configured": true, 00:15:45.059 "data_offset": 0, 00:15:45.059 "data_size": 65536 00:15:45.059 } 00:15:45.059 ] 00:15:45.059 }' 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.059 12:46:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.626 12:46:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1eba59b0-e5f4-45cb-8fc1-27cf959c27be 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.626 [2024-11-06 12:46:34.191045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:45.626 [2024-11-06 12:46:34.191131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:45.626 [2024-11-06 12:46:34.191152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:45.626 [2024-11-06 12:46:34.191543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:45.626 [2024-11-06 12:46:34.196498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:45.626 [2024-11-06 12:46:34.196535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:45.626 [2024-11-06 12:46:34.196891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.626 NewBaseBdev 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:45.626 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.626 12:46:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.626 [ 00:15:45.626 { 00:15:45.626 "name": "NewBaseBdev", 00:15:45.626 "aliases": [ 00:15:45.626 "1eba59b0-e5f4-45cb-8fc1-27cf959c27be" 00:15:45.626 ], 00:15:45.626 "product_name": "Malloc disk", 00:15:45.626 "block_size": 512, 00:15:45.626 "num_blocks": 65536, 00:15:45.626 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:45.627 "assigned_rate_limits": { 00:15:45.627 "rw_ios_per_sec": 0, 00:15:45.627 "rw_mbytes_per_sec": 0, 00:15:45.627 "r_mbytes_per_sec": 0, 00:15:45.627 "w_mbytes_per_sec": 0 00:15:45.627 }, 00:15:45.627 "claimed": true, 00:15:45.627 "claim_type": "exclusive_write", 00:15:45.627 "zoned": false, 00:15:45.627 "supported_io_types": { 00:15:45.627 "read": true, 00:15:45.627 "write": true, 00:15:45.627 "unmap": true, 00:15:45.627 "flush": true, 00:15:45.627 "reset": true, 00:15:45.627 "nvme_admin": false, 00:15:45.627 "nvme_io": false, 00:15:45.627 "nvme_io_md": false, 00:15:45.627 "write_zeroes": true, 00:15:45.627 "zcopy": true, 00:15:45.627 "get_zone_info": false, 00:15:45.627 "zone_management": false, 00:15:45.627 "zone_append": false, 00:15:45.627 "compare": false, 00:15:45.627 "compare_and_write": false, 00:15:45.627 "abort": true, 00:15:45.627 "seek_hole": false, 00:15:45.627 "seek_data": false, 00:15:45.627 "copy": true, 00:15:45.627 "nvme_iov_md": false 00:15:45.627 }, 00:15:45.627 "memory_domains": [ 00:15:45.627 { 00:15:45.627 "dma_device_id": "system", 00:15:45.627 "dma_device_type": 1 00:15:45.627 }, 00:15:45.627 { 00:15:45.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.627 "dma_device_type": 2 00:15:45.627 } 00:15:45.627 ], 00:15:45.627 "driver_specific": {} 00:15:45.627 } 00:15:45.627 ] 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:45.627 12:46:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.627 "name": "Existed_Raid", 00:15:45.627 "uuid": "28b96d24-1bf1-419e-9928-0cfe61f28c71", 00:15:45.627 "strip_size_kb": 64, 00:15:45.627 "state": "online", 
00:15:45.627 "raid_level": "raid5f", 00:15:45.627 "superblock": false, 00:15:45.627 "num_base_bdevs": 3, 00:15:45.627 "num_base_bdevs_discovered": 3, 00:15:45.627 "num_base_bdevs_operational": 3, 00:15:45.627 "base_bdevs_list": [ 00:15:45.627 { 00:15:45.627 "name": "NewBaseBdev", 00:15:45.627 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:45.627 "is_configured": true, 00:15:45.627 "data_offset": 0, 00:15:45.627 "data_size": 65536 00:15:45.627 }, 00:15:45.627 { 00:15:45.627 "name": "BaseBdev2", 00:15:45.627 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:45.627 "is_configured": true, 00:15:45.627 "data_offset": 0, 00:15:45.627 "data_size": 65536 00:15:45.627 }, 00:15:45.627 { 00:15:45.627 "name": "BaseBdev3", 00:15:45.627 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:45.627 "is_configured": true, 00:15:45.627 "data_offset": 0, 00:15:45.627 "data_size": 65536 00:15:45.627 } 00:15:45.627 ] 00:15:45.627 }' 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.627 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.194 12:46:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.194 [2024-11-06 12:46:34.738812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.194 "name": "Existed_Raid", 00:15:46.194 "aliases": [ 00:15:46.194 "28b96d24-1bf1-419e-9928-0cfe61f28c71" 00:15:46.194 ], 00:15:46.194 "product_name": "Raid Volume", 00:15:46.194 "block_size": 512, 00:15:46.194 "num_blocks": 131072, 00:15:46.194 "uuid": "28b96d24-1bf1-419e-9928-0cfe61f28c71", 00:15:46.194 "assigned_rate_limits": { 00:15:46.194 "rw_ios_per_sec": 0, 00:15:46.194 "rw_mbytes_per_sec": 0, 00:15:46.194 "r_mbytes_per_sec": 0, 00:15:46.194 "w_mbytes_per_sec": 0 00:15:46.194 }, 00:15:46.194 "claimed": false, 00:15:46.194 "zoned": false, 00:15:46.194 "supported_io_types": { 00:15:46.194 "read": true, 00:15:46.194 "write": true, 00:15:46.194 "unmap": false, 00:15:46.194 "flush": false, 00:15:46.194 "reset": true, 00:15:46.194 "nvme_admin": false, 00:15:46.194 "nvme_io": false, 00:15:46.194 "nvme_io_md": false, 00:15:46.194 "write_zeroes": true, 00:15:46.194 "zcopy": false, 00:15:46.194 "get_zone_info": false, 00:15:46.194 "zone_management": false, 00:15:46.194 "zone_append": false, 00:15:46.194 "compare": false, 00:15:46.194 "compare_and_write": false, 00:15:46.194 "abort": false, 00:15:46.194 "seek_hole": false, 00:15:46.194 "seek_data": false, 00:15:46.194 "copy": false, 00:15:46.194 "nvme_iov_md": false 00:15:46.194 }, 00:15:46.194 "driver_specific": { 00:15:46.194 "raid": { 00:15:46.194 "uuid": 
"28b96d24-1bf1-419e-9928-0cfe61f28c71", 00:15:46.194 "strip_size_kb": 64, 00:15:46.194 "state": "online", 00:15:46.194 "raid_level": "raid5f", 00:15:46.194 "superblock": false, 00:15:46.194 "num_base_bdevs": 3, 00:15:46.194 "num_base_bdevs_discovered": 3, 00:15:46.194 "num_base_bdevs_operational": 3, 00:15:46.194 "base_bdevs_list": [ 00:15:46.194 { 00:15:46.194 "name": "NewBaseBdev", 00:15:46.194 "uuid": "1eba59b0-e5f4-45cb-8fc1-27cf959c27be", 00:15:46.194 "is_configured": true, 00:15:46.194 "data_offset": 0, 00:15:46.194 "data_size": 65536 00:15:46.194 }, 00:15:46.194 { 00:15:46.194 "name": "BaseBdev2", 00:15:46.194 "uuid": "6ff03b2f-d50c-4bc5-b813-e0d0b3fd7f1a", 00:15:46.194 "is_configured": true, 00:15:46.194 "data_offset": 0, 00:15:46.194 "data_size": 65536 00:15:46.194 }, 00:15:46.194 { 00:15:46.194 "name": "BaseBdev3", 00:15:46.194 "uuid": "2804b6e5-85f0-4730-93cc-42504e0ad9e1", 00:15:46.194 "is_configured": true, 00:15:46.194 "data_offset": 0, 00:15:46.194 "data_size": 65536 00:15:46.194 } 00:15:46.194 ] 00:15:46.194 } 00:15:46.194 } 00:15:46.194 }' 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:46.194 BaseBdev2 00:15:46.194 BaseBdev3' 00:15:46.194 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.453 12:46:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 12:46:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.453 [2024-11-06 12:46:35.054633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.453 [2024-11-06 12:46:35.054674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.453 [2024-11-06 12:46:35.054768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.453 [2024-11-06 12:46:35.055123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.453 [2024-11-06 12:46:35.055160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80315 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80315 ']' 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80315 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80315 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.453 killing process with pid 80315 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80315' 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80315 00:15:46.453 [2024-11-06 12:46:35.097612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.453 12:46:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80315 00:15:46.712 [2024-11-06 12:46:35.348212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:48.088 00:15:48.088 real 0m11.564s 00:15:48.088 user 0m19.091s 00:15:48.088 sys 0m1.676s 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 ************************************ 00:15:48.088 END TEST raid5f_state_function_test 00:15:48.088 ************************************ 00:15:48.088 12:46:36 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:48.088 12:46:36 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:48.088 12:46:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:48.088 12:46:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 ************************************ 00:15:48.088 START TEST raid5f_state_function_test_sb 00:15:48.088 ************************************ 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:48.088 12:46:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80942 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.088 Process raid pid: 80942 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80942' 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80942 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80942 ']' 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:48.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:48.088 12:46:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 [2024-11-06 12:46:36.556501] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:15:48.088 [2024-11-06 12:46:36.556692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.350 [2024-11-06 12:46:36.759992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.350 [2024-11-06 12:46:36.889883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.608 [2024-11-06 12:46:37.099112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.608 [2024-11-06 12:46:37.099178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.175 [2024-11-06 12:46:37.542177] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.175 [2024-11-06 12:46:37.542309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.175 [2024-11-06 12:46:37.542331] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.175 [2024-11-06 12:46:37.542352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.175 [2024-11-06 12:46:37.542364] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:49.175 [2024-11-06 12:46:37.542382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.175 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.176 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.176 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.176 12:46:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.176 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.176 "name": "Existed_Raid", 00:15:49.176 "uuid": "85e14971-8403-4ab8-aa9b-0fa8b3ef55b3", 00:15:49.176 "strip_size_kb": 64, 00:15:49.176 "state": "configuring", 00:15:49.176 "raid_level": "raid5f", 00:15:49.176 "superblock": true, 00:15:49.176 "num_base_bdevs": 3, 00:15:49.176 "num_base_bdevs_discovered": 0, 00:15:49.176 "num_base_bdevs_operational": 3, 00:15:49.176 "base_bdevs_list": [ 00:15:49.176 { 00:15:49.176 "name": "BaseBdev1", 00:15:49.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.176 "is_configured": false, 00:15:49.176 "data_offset": 0, 00:15:49.176 "data_size": 0 00:15:49.176 }, 00:15:49.176 { 00:15:49.176 "name": "BaseBdev2", 00:15:49.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.176 "is_configured": false, 00:15:49.176 "data_offset": 0, 00:15:49.176 "data_size": 0 00:15:49.176 }, 00:15:49.176 { 00:15:49.176 "name": "BaseBdev3", 00:15:49.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.176 "is_configured": false, 00:15:49.176 "data_offset": 0, 00:15:49.176 "data_size": 0 00:15:49.176 } 00:15:49.176 ] 00:15:49.176 }' 00:15:49.176 12:46:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.176 12:46:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.434 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.434 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.434 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.692 [2024-11-06 12:46:38.090053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.692 
[2024-11-06 12:46:38.090109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.692 [2024-11-06 12:46:38.098065] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.692 [2024-11-06 12:46:38.098144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.692 [2024-11-06 12:46:38.098163] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.692 [2024-11-06 12:46:38.098183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.692 [2024-11-06 12:46:38.098212] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.692 [2024-11-06 12:46:38.098233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.692 [2024-11-06 12:46:38.143151] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.692 BaseBdev1 00:15:49.692 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.693 [ 00:15:49.693 { 00:15:49.693 "name": "BaseBdev1", 00:15:49.693 "aliases": [ 00:15:49.693 "6bf25a83-ae91-4fe3-a774-0b49d3924ad1" 00:15:49.693 ], 00:15:49.693 "product_name": "Malloc disk", 00:15:49.693 "block_size": 512, 00:15:49.693 
"num_blocks": 65536, 00:15:49.693 "uuid": "6bf25a83-ae91-4fe3-a774-0b49d3924ad1", 00:15:49.693 "assigned_rate_limits": { 00:15:49.693 "rw_ios_per_sec": 0, 00:15:49.693 "rw_mbytes_per_sec": 0, 00:15:49.693 "r_mbytes_per_sec": 0, 00:15:49.693 "w_mbytes_per_sec": 0 00:15:49.693 }, 00:15:49.693 "claimed": true, 00:15:49.693 "claim_type": "exclusive_write", 00:15:49.693 "zoned": false, 00:15:49.693 "supported_io_types": { 00:15:49.693 "read": true, 00:15:49.693 "write": true, 00:15:49.693 "unmap": true, 00:15:49.693 "flush": true, 00:15:49.693 "reset": true, 00:15:49.693 "nvme_admin": false, 00:15:49.693 "nvme_io": false, 00:15:49.693 "nvme_io_md": false, 00:15:49.693 "write_zeroes": true, 00:15:49.693 "zcopy": true, 00:15:49.693 "get_zone_info": false, 00:15:49.693 "zone_management": false, 00:15:49.693 "zone_append": false, 00:15:49.693 "compare": false, 00:15:49.693 "compare_and_write": false, 00:15:49.693 "abort": true, 00:15:49.693 "seek_hole": false, 00:15:49.693 "seek_data": false, 00:15:49.693 "copy": true, 00:15:49.693 "nvme_iov_md": false 00:15:49.693 }, 00:15:49.693 "memory_domains": [ 00:15:49.693 { 00:15:49.693 "dma_device_id": "system", 00:15:49.693 "dma_device_type": 1 00:15:49.693 }, 00:15:49.693 { 00:15:49.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.693 "dma_device_type": 2 00:15:49.693 } 00:15:49.693 ], 00:15:49.693 "driver_specific": {} 00:15:49.693 } 00:15:49.693 ] 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.693 "name": "Existed_Raid", 00:15:49.693 "uuid": "dc21ce64-212c-43f3-af22-5bea992a6476", 00:15:49.693 "strip_size_kb": 64, 00:15:49.693 "state": "configuring", 00:15:49.693 "raid_level": "raid5f", 00:15:49.693 "superblock": true, 00:15:49.693 "num_base_bdevs": 3, 00:15:49.693 "num_base_bdevs_discovered": 1, 00:15:49.693 "num_base_bdevs_operational": 3, 00:15:49.693 "base_bdevs_list": [ 00:15:49.693 { 00:15:49.693 
"name": "BaseBdev1", 00:15:49.693 "uuid": "6bf25a83-ae91-4fe3-a774-0b49d3924ad1", 00:15:49.693 "is_configured": true, 00:15:49.693 "data_offset": 2048, 00:15:49.693 "data_size": 63488 00:15:49.693 }, 00:15:49.693 { 00:15:49.693 "name": "BaseBdev2", 00:15:49.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.693 "is_configured": false, 00:15:49.693 "data_offset": 0, 00:15:49.693 "data_size": 0 00:15:49.693 }, 00:15:49.693 { 00:15:49.693 "name": "BaseBdev3", 00:15:49.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.693 "is_configured": false, 00:15:49.693 "data_offset": 0, 00:15:49.693 "data_size": 0 00:15:49.693 } 00:15:49.693 ] 00:15:49.693 }' 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.693 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.262 [2024-11-06 12:46:38.687411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.262 [2024-11-06 12:46:38.687492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:50.262 [2024-11-06 12:46:38.695452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.262 [2024-11-06 12:46:38.697906] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.262 [2024-11-06 12:46:38.697988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.262 [2024-11-06 12:46:38.698009] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.262 [2024-11-06 12:46:38.698029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.262 "name": "Existed_Raid", 00:15:50.262 "uuid": "41ff530f-1be9-4b7e-8983-5e17951c59f0", 00:15:50.262 "strip_size_kb": 64, 00:15:50.262 "state": "configuring", 00:15:50.262 "raid_level": "raid5f", 00:15:50.262 "superblock": true, 00:15:50.262 "num_base_bdevs": 3, 00:15:50.262 "num_base_bdevs_discovered": 1, 00:15:50.262 "num_base_bdevs_operational": 3, 00:15:50.262 "base_bdevs_list": [ 00:15:50.262 { 00:15:50.262 "name": "BaseBdev1", 00:15:50.262 "uuid": "6bf25a83-ae91-4fe3-a774-0b49d3924ad1", 00:15:50.262 "is_configured": true, 00:15:50.262 "data_offset": 2048, 00:15:50.262 "data_size": 63488 00:15:50.262 }, 00:15:50.262 { 00:15:50.262 "name": "BaseBdev2", 00:15:50.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.262 "is_configured": false, 00:15:50.262 "data_offset": 0, 00:15:50.262 "data_size": 0 00:15:50.262 }, 00:15:50.262 { 00:15:50.262 "name": "BaseBdev3", 00:15:50.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.262 "is_configured": false, 00:15:50.262 "data_offset": 0, 00:15:50.262 "data_size": 
0 00:15:50.262 } 00:15:50.262 ] 00:15:50.262 }' 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.262 12:46:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 [2024-11-06 12:46:39.294352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.830 BaseBdev2 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.830 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.830 [ 00:15:50.830 { 00:15:50.830 "name": "BaseBdev2", 00:15:50.830 "aliases": [ 00:15:50.830 "f9a5a63a-4115-4194-a373-77581efbe472" 00:15:50.830 ], 00:15:50.830 "product_name": "Malloc disk", 00:15:50.830 "block_size": 512, 00:15:50.830 "num_blocks": 65536, 00:15:50.830 "uuid": "f9a5a63a-4115-4194-a373-77581efbe472", 00:15:50.830 "assigned_rate_limits": { 00:15:50.830 "rw_ios_per_sec": 0, 00:15:50.830 "rw_mbytes_per_sec": 0, 00:15:50.830 "r_mbytes_per_sec": 0, 00:15:50.830 "w_mbytes_per_sec": 0 00:15:50.830 }, 00:15:50.830 "claimed": true, 00:15:50.830 "claim_type": "exclusive_write", 00:15:50.830 "zoned": false, 00:15:50.830 "supported_io_types": { 00:15:50.830 "read": true, 00:15:50.830 "write": true, 00:15:50.830 "unmap": true, 00:15:50.830 "flush": true, 00:15:50.830 "reset": true, 00:15:50.830 "nvme_admin": false, 00:15:50.830 "nvme_io": false, 00:15:50.830 "nvme_io_md": false, 00:15:50.830 "write_zeroes": true, 00:15:50.830 "zcopy": true, 00:15:50.830 "get_zone_info": false, 00:15:50.830 "zone_management": false, 00:15:50.830 "zone_append": false, 00:15:50.830 "compare": false, 00:15:50.830 "compare_and_write": false, 00:15:50.830 "abort": true, 00:15:50.830 "seek_hole": false, 00:15:50.830 "seek_data": false, 00:15:50.830 "copy": true, 00:15:50.830 "nvme_iov_md": false 00:15:50.830 }, 00:15:50.830 "memory_domains": [ 00:15:50.830 { 00:15:50.830 "dma_device_id": "system", 00:15:50.830 "dma_device_type": 1 00:15:50.830 }, 00:15:50.830 { 00:15:50.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.830 "dma_device_type": 2 00:15:50.831 } 
00:15:50.831 ], 00:15:50.831 "driver_specific": {} 00:15:50.831 } 00:15:50.831 ] 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.831 "name": "Existed_Raid", 00:15:50.831 "uuid": "41ff530f-1be9-4b7e-8983-5e17951c59f0", 00:15:50.831 "strip_size_kb": 64, 00:15:50.831 "state": "configuring", 00:15:50.831 "raid_level": "raid5f", 00:15:50.831 "superblock": true, 00:15:50.831 "num_base_bdevs": 3, 00:15:50.831 "num_base_bdevs_discovered": 2, 00:15:50.831 "num_base_bdevs_operational": 3, 00:15:50.831 "base_bdevs_list": [ 00:15:50.831 { 00:15:50.831 "name": "BaseBdev1", 00:15:50.831 "uuid": "6bf25a83-ae91-4fe3-a774-0b49d3924ad1", 00:15:50.831 "is_configured": true, 00:15:50.831 "data_offset": 2048, 00:15:50.831 "data_size": 63488 00:15:50.831 }, 00:15:50.831 { 00:15:50.831 "name": "BaseBdev2", 00:15:50.831 "uuid": "f9a5a63a-4115-4194-a373-77581efbe472", 00:15:50.831 "is_configured": true, 00:15:50.831 "data_offset": 2048, 00:15:50.831 "data_size": 63488 00:15:50.831 }, 00:15:50.831 { 00:15:50.831 "name": "BaseBdev3", 00:15:50.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.831 "is_configured": false, 00:15:50.831 "data_offset": 0, 00:15:50.831 "data_size": 0 00:15:50.831 } 00:15:50.831 ] 00:15:50.831 }' 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.831 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.399 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.399 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:51.399 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.399 [2024-11-06 12:46:39.940041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.400 BaseBdev3 00:15:51.400 [2024-11-06 12:46:39.940647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.400 [2024-11-06 12:46:39.940690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.400 [2024-11-06 12:46:39.941050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.400 [2024-11-06 12:46:39.946433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.400 [2024-11-06 12:46:39.946464] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:51.400 [2024-11-06 12:46:39.946829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.400 [ 00:15:51.400 { 00:15:51.400 "name": "BaseBdev3", 00:15:51.400 "aliases": [ 00:15:51.400 "380b6465-70e1-42d1-8fc1-b3250305e11c" 00:15:51.400 ], 00:15:51.400 "product_name": "Malloc disk", 00:15:51.400 "block_size": 512, 00:15:51.400 "num_blocks": 65536, 00:15:51.400 "uuid": "380b6465-70e1-42d1-8fc1-b3250305e11c", 00:15:51.400 "assigned_rate_limits": { 00:15:51.400 "rw_ios_per_sec": 0, 00:15:51.400 "rw_mbytes_per_sec": 0, 00:15:51.400 "r_mbytes_per_sec": 0, 00:15:51.400 "w_mbytes_per_sec": 0 00:15:51.400 }, 00:15:51.400 "claimed": true, 00:15:51.400 "claim_type": "exclusive_write", 00:15:51.400 "zoned": false, 00:15:51.400 "supported_io_types": { 00:15:51.400 "read": true, 00:15:51.400 "write": true, 00:15:51.400 "unmap": true, 00:15:51.400 "flush": true, 00:15:51.400 "reset": true, 00:15:51.400 "nvme_admin": false, 00:15:51.400 "nvme_io": false, 00:15:51.400 "nvme_io_md": false, 00:15:51.400 "write_zeroes": true, 00:15:51.400 "zcopy": true, 00:15:51.400 "get_zone_info": false, 00:15:51.400 "zone_management": false, 00:15:51.400 "zone_append": false, 00:15:51.400 "compare": false, 00:15:51.400 "compare_and_write": false, 00:15:51.400 "abort": true, 00:15:51.400 "seek_hole": false, 00:15:51.400 "seek_data": false, 00:15:51.400 "copy": true, 00:15:51.400 
"nvme_iov_md": false 00:15:51.400 }, 00:15:51.400 "memory_domains": [ 00:15:51.400 { 00:15:51.400 "dma_device_id": "system", 00:15:51.400 "dma_device_type": 1 00:15:51.400 }, 00:15:51.400 { 00:15:51.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.400 "dma_device_type": 2 00:15:51.400 } 00:15:51.400 ], 00:15:51.400 "driver_specific": {} 00:15:51.400 } 00:15:51.400 ] 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.400 12:46:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.400 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.400 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.400 "name": "Existed_Raid", 00:15:51.400 "uuid": "41ff530f-1be9-4b7e-8983-5e17951c59f0", 00:15:51.400 "strip_size_kb": 64, 00:15:51.400 "state": "online", 00:15:51.400 "raid_level": "raid5f", 00:15:51.400 "superblock": true, 00:15:51.400 "num_base_bdevs": 3, 00:15:51.400 "num_base_bdevs_discovered": 3, 00:15:51.400 "num_base_bdevs_operational": 3, 00:15:51.400 "base_bdevs_list": [ 00:15:51.400 { 00:15:51.400 "name": "BaseBdev1", 00:15:51.400 "uuid": "6bf25a83-ae91-4fe3-a774-0b49d3924ad1", 00:15:51.400 "is_configured": true, 00:15:51.400 "data_offset": 2048, 00:15:51.400 "data_size": 63488 00:15:51.400 }, 00:15:51.400 { 00:15:51.400 "name": "BaseBdev2", 00:15:51.400 "uuid": "f9a5a63a-4115-4194-a373-77581efbe472", 00:15:51.400 "is_configured": true, 00:15:51.400 "data_offset": 2048, 00:15:51.400 "data_size": 63488 00:15:51.400 }, 00:15:51.400 { 00:15:51.400 "name": "BaseBdev3", 00:15:51.400 "uuid": "380b6465-70e1-42d1-8fc1-b3250305e11c", 00:15:51.400 "is_configured": true, 00:15:51.400 "data_offset": 2048, 00:15:51.400 "data_size": 63488 00:15:51.400 } 00:15:51.400 ] 00:15:51.400 }' 00:15:51.400 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.400 12:46:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.967 [2024-11-06 12:46:40.488993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.967 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.967 "name": "Existed_Raid", 00:15:51.967 "aliases": [ 00:15:51.967 "41ff530f-1be9-4b7e-8983-5e17951c59f0" 00:15:51.967 ], 00:15:51.967 "product_name": "Raid Volume", 00:15:51.967 "block_size": 512, 00:15:51.967 "num_blocks": 126976, 00:15:51.967 "uuid": "41ff530f-1be9-4b7e-8983-5e17951c59f0", 00:15:51.967 "assigned_rate_limits": { 00:15:51.967 "rw_ios_per_sec": 0, 00:15:51.967 
"rw_mbytes_per_sec": 0, 00:15:51.967 "r_mbytes_per_sec": 0, 00:15:51.967 "w_mbytes_per_sec": 0 00:15:51.967 }, 00:15:51.967 "claimed": false, 00:15:51.967 "zoned": false, 00:15:51.967 "supported_io_types": { 00:15:51.967 "read": true, 00:15:51.967 "write": true, 00:15:51.967 "unmap": false, 00:15:51.967 "flush": false, 00:15:51.967 "reset": true, 00:15:51.967 "nvme_admin": false, 00:15:51.967 "nvme_io": false, 00:15:51.967 "nvme_io_md": false, 00:15:51.967 "write_zeroes": true, 00:15:51.967 "zcopy": false, 00:15:51.967 "get_zone_info": false, 00:15:51.967 "zone_management": false, 00:15:51.967 "zone_append": false, 00:15:51.967 "compare": false, 00:15:51.967 "compare_and_write": false, 00:15:51.967 "abort": false, 00:15:51.967 "seek_hole": false, 00:15:51.967 "seek_data": false, 00:15:51.967 "copy": false, 00:15:51.968 "nvme_iov_md": false 00:15:51.968 }, 00:15:51.968 "driver_specific": { 00:15:51.968 "raid": { 00:15:51.968 "uuid": "41ff530f-1be9-4b7e-8983-5e17951c59f0", 00:15:51.968 "strip_size_kb": 64, 00:15:51.968 "state": "online", 00:15:51.968 "raid_level": "raid5f", 00:15:51.968 "superblock": true, 00:15:51.968 "num_base_bdevs": 3, 00:15:51.968 "num_base_bdevs_discovered": 3, 00:15:51.968 "num_base_bdevs_operational": 3, 00:15:51.968 "base_bdevs_list": [ 00:15:51.968 { 00:15:51.968 "name": "BaseBdev1", 00:15:51.968 "uuid": "6bf25a83-ae91-4fe3-a774-0b49d3924ad1", 00:15:51.968 "is_configured": true, 00:15:51.968 "data_offset": 2048, 00:15:51.968 "data_size": 63488 00:15:51.968 }, 00:15:51.968 { 00:15:51.968 "name": "BaseBdev2", 00:15:51.968 "uuid": "f9a5a63a-4115-4194-a373-77581efbe472", 00:15:51.968 "is_configured": true, 00:15:51.968 "data_offset": 2048, 00:15:51.968 "data_size": 63488 00:15:51.968 }, 00:15:51.968 { 00:15:51.968 "name": "BaseBdev3", 00:15:51.968 "uuid": "380b6465-70e1-42d1-8fc1-b3250305e11c", 00:15:51.968 "is_configured": true, 00:15:51.968 "data_offset": 2048, 00:15:51.968 "data_size": 63488 00:15:51.968 } 00:15:51.968 ] 00:15:51.968 } 
00:15:51.968 } 00:15:51.968 }' 00:15:51.968 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.968 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:51.968 BaseBdev2 00:15:51.968 BaseBdev3' 00:15:51.968 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.968 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.968 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 [2024-11-06 
12:46:40.776811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.226 12:46:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.485 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.485 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.485 "name": "Existed_Raid", 00:15:52.485 "uuid": "41ff530f-1be9-4b7e-8983-5e17951c59f0", 00:15:52.485 "strip_size_kb": 64, 00:15:52.485 "state": "online", 00:15:52.485 "raid_level": "raid5f", 00:15:52.485 "superblock": true, 00:15:52.485 "num_base_bdevs": 3, 00:15:52.485 "num_base_bdevs_discovered": 2, 00:15:52.485 "num_base_bdevs_operational": 2, 00:15:52.485 "base_bdevs_list": [ 00:15:52.485 { 00:15:52.485 "name": null, 00:15:52.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.485 "is_configured": false, 00:15:52.485 "data_offset": 0, 00:15:52.485 "data_size": 63488 00:15:52.485 }, 00:15:52.485 { 00:15:52.485 "name": "BaseBdev2", 00:15:52.485 "uuid": "f9a5a63a-4115-4194-a373-77581efbe472", 00:15:52.485 "is_configured": true, 00:15:52.485 "data_offset": 2048, 00:15:52.485 "data_size": 63488 00:15:52.485 }, 00:15:52.485 { 00:15:52.485 "name": "BaseBdev3", 00:15:52.485 "uuid": "380b6465-70e1-42d1-8fc1-b3250305e11c", 00:15:52.485 "is_configured": true, 00:15:52.485 "data_offset": 2048, 00:15:52.485 "data_size": 63488 00:15:52.485 } 00:15:52.485 ] 00:15:52.485 }' 00:15:52.485 12:46:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.485 12:46:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:52.744 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.002 [2024-11-06 12:46:41.425892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.002 [2024-11-06 12:46:41.426115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.002 [2024-11-06 12:46:41.511727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.002 12:46:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.002 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.002 [2024-11-06 12:46:41.583715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.002 [2024-11-06 12:46:41.583772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.261 
12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.261 BaseBdev2 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:53.261 12:46:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.261 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.261 [ 00:15:53.261 { 00:15:53.261 "name": "BaseBdev2", 00:15:53.261 "aliases": [ 00:15:53.261 "27f8dffb-47c3-42b6-8837-71f9249620de" 00:15:53.261 ], 00:15:53.261 "product_name": "Malloc disk", 00:15:53.261 "block_size": 512, 00:15:53.261 "num_blocks": 65536, 00:15:53.261 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:53.261 "assigned_rate_limits": { 00:15:53.261 "rw_ios_per_sec": 0, 00:15:53.261 "rw_mbytes_per_sec": 0, 00:15:53.261 "r_mbytes_per_sec": 0, 00:15:53.261 "w_mbytes_per_sec": 0 00:15:53.261 }, 00:15:53.261 "claimed": false, 00:15:53.261 "zoned": false, 00:15:53.261 "supported_io_types": { 00:15:53.261 "read": true, 00:15:53.261 "write": true, 00:15:53.261 "unmap": true, 00:15:53.261 "flush": true, 00:15:53.261 "reset": true, 00:15:53.261 "nvme_admin": false, 00:15:53.261 "nvme_io": false, 00:15:53.261 "nvme_io_md": false, 00:15:53.261 "write_zeroes": true, 00:15:53.261 "zcopy": true, 00:15:53.261 "get_zone_info": false, 
00:15:53.261 "zone_management": false, 00:15:53.262 "zone_append": false, 00:15:53.262 "compare": false, 00:15:53.262 "compare_and_write": false, 00:15:53.262 "abort": true, 00:15:53.262 "seek_hole": false, 00:15:53.262 "seek_data": false, 00:15:53.262 "copy": true, 00:15:53.262 "nvme_iov_md": false 00:15:53.262 }, 00:15:53.262 "memory_domains": [ 00:15:53.262 { 00:15:53.262 "dma_device_id": "system", 00:15:53.262 "dma_device_type": 1 00:15:53.262 }, 00:15:53.262 { 00:15:53.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.262 "dma_device_type": 2 00:15:53.262 } 00:15:53.262 ], 00:15:53.262 "driver_specific": {} 00:15:53.262 } 00:15:53.262 ] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.262 BaseBdev3 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:53.262 12:46:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.262 [ 00:15:53.262 { 00:15:53.262 "name": "BaseBdev3", 00:15:53.262 "aliases": [ 00:15:53.262 "eff8686f-9b22-47ce-9c4c-a0e1efd85545" 00:15:53.262 ], 00:15:53.262 "product_name": "Malloc disk", 00:15:53.262 "block_size": 512, 00:15:53.262 "num_blocks": 65536, 00:15:53.262 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:53.262 "assigned_rate_limits": { 00:15:53.262 "rw_ios_per_sec": 0, 00:15:53.262 "rw_mbytes_per_sec": 0, 00:15:53.262 "r_mbytes_per_sec": 0, 00:15:53.262 "w_mbytes_per_sec": 0 00:15:53.262 }, 00:15:53.262 "claimed": false, 00:15:53.262 "zoned": false, 00:15:53.262 "supported_io_types": { 00:15:53.262 "read": true, 00:15:53.262 "write": true, 00:15:53.262 "unmap": true, 00:15:53.262 "flush": true, 00:15:53.262 "reset": true, 00:15:53.262 "nvme_admin": false, 00:15:53.262 "nvme_io": false, 00:15:53.262 "nvme_io_md": 
false, 00:15:53.262 "write_zeroes": true, 00:15:53.262 "zcopy": true, 00:15:53.262 "get_zone_info": false, 00:15:53.262 "zone_management": false, 00:15:53.262 "zone_append": false, 00:15:53.262 "compare": false, 00:15:53.262 "compare_and_write": false, 00:15:53.262 "abort": true, 00:15:53.262 "seek_hole": false, 00:15:53.262 "seek_data": false, 00:15:53.262 "copy": true, 00:15:53.262 "nvme_iov_md": false 00:15:53.262 }, 00:15:53.262 "memory_domains": [ 00:15:53.262 { 00:15:53.262 "dma_device_id": "system", 00:15:53.262 "dma_device_type": 1 00:15:53.262 }, 00:15:53.262 { 00:15:53.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.262 "dma_device_type": 2 00:15:53.262 } 00:15:53.262 ], 00:15:53.262 "driver_specific": {} 00:15:53.262 } 00:15:53.262 ] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.262 [2024-11-06 12:46:41.884332] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.262 [2024-11-06 12:46:41.884513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.262 [2024-11-06 12:46:41.884644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:53.262 [2024-11-06 12:46:41.887134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.262 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.262 12:46:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.520 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.520 "name": "Existed_Raid", 00:15:53.520 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:53.520 "strip_size_kb": 64, 00:15:53.520 "state": "configuring", 00:15:53.520 "raid_level": "raid5f", 00:15:53.520 "superblock": true, 00:15:53.520 "num_base_bdevs": 3, 00:15:53.520 "num_base_bdevs_discovered": 2, 00:15:53.520 "num_base_bdevs_operational": 3, 00:15:53.520 "base_bdevs_list": [ 00:15:53.520 { 00:15:53.520 "name": "BaseBdev1", 00:15:53.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.520 "is_configured": false, 00:15:53.520 "data_offset": 0, 00:15:53.520 "data_size": 0 00:15:53.520 }, 00:15:53.520 { 00:15:53.520 "name": "BaseBdev2", 00:15:53.520 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:53.520 "is_configured": true, 00:15:53.520 "data_offset": 2048, 00:15:53.520 "data_size": 63488 00:15:53.520 }, 00:15:53.520 { 00:15:53.520 "name": "BaseBdev3", 00:15:53.520 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:53.520 "is_configured": true, 00:15:53.520 "data_offset": 2048, 00:15:53.520 "data_size": 63488 00:15:53.520 } 00:15:53.520 ] 00:15:53.520 }' 00:15:53.520 12:46:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.520 12:46:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.779 [2024-11-06 12:46:42.388487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.779 
12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.779 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.037 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:54.037 "name": "Existed_Raid", 00:15:54.037 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:54.037 "strip_size_kb": 64, 00:15:54.037 "state": "configuring", 00:15:54.037 "raid_level": "raid5f", 00:15:54.037 "superblock": true, 00:15:54.037 "num_base_bdevs": 3, 00:15:54.038 "num_base_bdevs_discovered": 1, 00:15:54.038 "num_base_bdevs_operational": 3, 00:15:54.038 "base_bdevs_list": [ 00:15:54.038 { 00:15:54.038 "name": "BaseBdev1", 00:15:54.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.038 "is_configured": false, 00:15:54.038 "data_offset": 0, 00:15:54.038 "data_size": 0 00:15:54.038 }, 00:15:54.038 { 00:15:54.038 "name": null, 00:15:54.038 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:54.038 "is_configured": false, 00:15:54.038 "data_offset": 0, 00:15:54.038 "data_size": 63488 00:15:54.038 }, 00:15:54.038 { 00:15:54.038 "name": "BaseBdev3", 00:15:54.038 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:54.038 "is_configured": true, 00:15:54.038 "data_offset": 2048, 00:15:54.038 "data_size": 63488 00:15:54.038 } 00:15:54.038 ] 00:15:54.038 }' 00:15:54.038 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.038 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.296 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:54.296 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.296 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.296 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.296 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.555 12:46:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:54.555 12:46:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.555 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.555 12:46:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.555 [2024-11-06 12:46:43.001600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.555 BaseBdev1 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.555 
12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.555 [ 00:15:54.555 { 00:15:54.555 "name": "BaseBdev1", 00:15:54.555 "aliases": [ 00:15:54.555 "c6a1eccf-1457-47d6-a472-94497c640706" 00:15:54.555 ], 00:15:54.555 "product_name": "Malloc disk", 00:15:54.555 "block_size": 512, 00:15:54.555 "num_blocks": 65536, 00:15:54.555 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:54.555 "assigned_rate_limits": { 00:15:54.555 "rw_ios_per_sec": 0, 00:15:54.555 "rw_mbytes_per_sec": 0, 00:15:54.555 "r_mbytes_per_sec": 0, 00:15:54.555 "w_mbytes_per_sec": 0 00:15:54.555 }, 00:15:54.555 "claimed": true, 00:15:54.555 "claim_type": "exclusive_write", 00:15:54.555 "zoned": false, 00:15:54.555 "supported_io_types": { 00:15:54.555 "read": true, 00:15:54.555 "write": true, 00:15:54.555 "unmap": true, 00:15:54.555 "flush": true, 00:15:54.555 "reset": true, 00:15:54.555 "nvme_admin": false, 00:15:54.555 "nvme_io": false, 00:15:54.555 "nvme_io_md": false, 00:15:54.555 "write_zeroes": true, 00:15:54.555 "zcopy": true, 00:15:54.555 "get_zone_info": false, 00:15:54.555 "zone_management": false, 00:15:54.555 "zone_append": false, 00:15:54.555 "compare": false, 00:15:54.555 "compare_and_write": false, 00:15:54.555 "abort": true, 00:15:54.555 "seek_hole": false, 00:15:54.555 "seek_data": false, 00:15:54.555 "copy": true, 00:15:54.555 "nvme_iov_md": false 00:15:54.555 }, 00:15:54.555 "memory_domains": [ 00:15:54.555 { 00:15:54.555 "dma_device_id": "system", 00:15:54.555 "dma_device_type": 1 00:15:54.555 }, 00:15:54.555 { 00:15:54.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.555 "dma_device_type": 2 00:15:54.555 } 00:15:54.555 ], 00:15:54.555 "driver_specific": {} 00:15:54.555 } 00:15:54.555 ] 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.555 
12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:54.555 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:54.556 "name": "Existed_Raid", 00:15:54.556 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:54.556 "strip_size_kb": 64, 00:15:54.556 "state": "configuring", 00:15:54.556 "raid_level": "raid5f", 00:15:54.556 "superblock": true, 00:15:54.556 "num_base_bdevs": 3, 00:15:54.556 "num_base_bdevs_discovered": 2, 00:15:54.556 "num_base_bdevs_operational": 3, 00:15:54.556 "base_bdevs_list": [ 00:15:54.556 { 00:15:54.556 "name": "BaseBdev1", 00:15:54.556 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:54.556 "is_configured": true, 00:15:54.556 "data_offset": 2048, 00:15:54.556 "data_size": 63488 00:15:54.556 }, 00:15:54.556 { 00:15:54.556 "name": null, 00:15:54.556 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:54.556 "is_configured": false, 00:15:54.556 "data_offset": 0, 00:15:54.556 "data_size": 63488 00:15:54.556 }, 00:15:54.556 { 00:15:54.556 "name": "BaseBdev3", 00:15:54.556 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:54.556 "is_configured": true, 00:15:54.556 "data_offset": 2048, 00:15:54.556 "data_size": 63488 00:15:54.556 } 00:15:54.556 ] 00:15:54.556 }' 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.556 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 [2024-11-06 12:46:43.609898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.122 12:46:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.122 "name": "Existed_Raid", 00:15:55.122 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:55.122 "strip_size_kb": 64, 00:15:55.122 "state": "configuring", 00:15:55.122 "raid_level": "raid5f", 00:15:55.122 "superblock": true, 00:15:55.122 "num_base_bdevs": 3, 00:15:55.122 "num_base_bdevs_discovered": 1, 00:15:55.122 "num_base_bdevs_operational": 3, 00:15:55.122 "base_bdevs_list": [ 00:15:55.122 { 00:15:55.122 "name": "BaseBdev1", 00:15:55.122 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:55.122 "is_configured": true, 00:15:55.122 "data_offset": 2048, 00:15:55.122 "data_size": 63488 00:15:55.122 }, 00:15:55.122 { 00:15:55.122 "name": null, 00:15:55.122 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:55.122 "is_configured": false, 00:15:55.122 "data_offset": 0, 00:15:55.122 "data_size": 63488 00:15:55.122 }, 00:15:55.122 { 00:15:55.122 "name": null, 00:15:55.122 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:55.122 "is_configured": false, 00:15:55.122 "data_offset": 0, 00:15:55.122 "data_size": 63488 00:15:55.122 } 00:15:55.122 ] 00:15:55.122 }' 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.122 12:46:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.689 12:46:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.689 [2024-11-06 12:46:44.182089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.689 "name": "Existed_Raid", 00:15:55.689 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:55.689 "strip_size_kb": 64, 00:15:55.689 "state": "configuring", 00:15:55.689 "raid_level": "raid5f", 00:15:55.689 "superblock": true, 00:15:55.689 "num_base_bdevs": 3, 00:15:55.689 "num_base_bdevs_discovered": 2, 00:15:55.689 "num_base_bdevs_operational": 3, 00:15:55.689 "base_bdevs_list": [ 00:15:55.689 { 00:15:55.689 "name": "BaseBdev1", 00:15:55.689 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:55.689 "is_configured": true, 00:15:55.689 "data_offset": 2048, 00:15:55.689 "data_size": 63488 00:15:55.689 }, 00:15:55.689 { 00:15:55.689 "name": null, 00:15:55.689 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:55.689 "is_configured": false, 00:15:55.689 "data_offset": 0, 00:15:55.689 "data_size": 63488 00:15:55.689 }, 00:15:55.689 { 00:15:55.689 "name": "BaseBdev3", 00:15:55.689 
"uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:55.689 "is_configured": true, 00:15:55.689 "data_offset": 2048, 00:15:55.689 "data_size": 63488 00:15:55.689 } 00:15:55.689 ] 00:15:55.689 }' 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.689 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.257 [2024-11-06 12:46:44.818407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.257 12:46:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.257 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.519 "name": "Existed_Raid", 00:15:56.519 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:56.519 "strip_size_kb": 64, 00:15:56.519 "state": "configuring", 00:15:56.519 "raid_level": "raid5f", 00:15:56.519 "superblock": true, 00:15:56.519 "num_base_bdevs": 3, 00:15:56.519 "num_base_bdevs_discovered": 1, 00:15:56.519 "num_base_bdevs_operational": 3, 00:15:56.519 
"base_bdevs_list": [ 00:15:56.519 { 00:15:56.519 "name": null, 00:15:56.519 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:56.519 "is_configured": false, 00:15:56.519 "data_offset": 0, 00:15:56.519 "data_size": 63488 00:15:56.519 }, 00:15:56.519 { 00:15:56.519 "name": null, 00:15:56.519 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:56.519 "is_configured": false, 00:15:56.519 "data_offset": 0, 00:15:56.519 "data_size": 63488 00:15:56.519 }, 00:15:56.519 { 00:15:56.519 "name": "BaseBdev3", 00:15:56.519 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:56.519 "is_configured": true, 00:15:56.519 "data_offset": 2048, 00:15:56.519 "data_size": 63488 00:15:56.519 } 00:15:56.519 ] 00:15:56.519 }' 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.519 12:46:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:57.087 [2024-11-06 12:46:45.508250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.087 12:46:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.087 "name": "Existed_Raid", 00:15:57.087 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:57.087 "strip_size_kb": 64, 00:15:57.087 "state": "configuring", 00:15:57.087 "raid_level": "raid5f", 00:15:57.087 "superblock": true, 00:15:57.087 "num_base_bdevs": 3, 00:15:57.087 "num_base_bdevs_discovered": 2, 00:15:57.087 "num_base_bdevs_operational": 3, 00:15:57.087 "base_bdevs_list": [ 00:15:57.087 { 00:15:57.087 "name": null, 00:15:57.087 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:57.087 "is_configured": false, 00:15:57.087 "data_offset": 0, 00:15:57.087 "data_size": 63488 00:15:57.087 }, 00:15:57.087 { 00:15:57.087 "name": "BaseBdev2", 00:15:57.087 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:57.087 "is_configured": true, 00:15:57.087 "data_offset": 2048, 00:15:57.087 "data_size": 63488 00:15:57.087 }, 00:15:57.087 { 00:15:57.087 "name": "BaseBdev3", 00:15:57.087 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:57.087 "is_configured": true, 00:15:57.087 "data_offset": 2048, 00:15:57.087 "data_size": 63488 00:15:57.087 } 00:15:57.087 ] 00:15:57.087 }' 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.087 12:46:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6a1eccf-1457-47d6-a472-94497c640706 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.656 [2024-11-06 12:46:46.174887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:57.656 [2024-11-06 12:46:46.175220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:57.656 [2024-11-06 12:46:46.175245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:57.656 [2024-11-06 12:46:46.175570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:57.656 NewBaseBdev 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:57.656 12:46:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.656 [2024-11-06 12:46:46.180432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:57.656 [2024-11-06 12:46:46.180670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:57.656 [2024-11-06 12:46:46.180885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.656 [ 00:15:57.656 { 00:15:57.656 "name": "NewBaseBdev", 00:15:57.656 "aliases": [ 00:15:57.656 "c6a1eccf-1457-47d6-a472-94497c640706" 00:15:57.656 ], 00:15:57.656 "product_name": "Malloc 
disk", 00:15:57.656 "block_size": 512, 00:15:57.656 "num_blocks": 65536, 00:15:57.656 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:57.656 "assigned_rate_limits": { 00:15:57.656 "rw_ios_per_sec": 0, 00:15:57.656 "rw_mbytes_per_sec": 0, 00:15:57.656 "r_mbytes_per_sec": 0, 00:15:57.656 "w_mbytes_per_sec": 0 00:15:57.656 }, 00:15:57.656 "claimed": true, 00:15:57.656 "claim_type": "exclusive_write", 00:15:57.656 "zoned": false, 00:15:57.656 "supported_io_types": { 00:15:57.656 "read": true, 00:15:57.656 "write": true, 00:15:57.656 "unmap": true, 00:15:57.656 "flush": true, 00:15:57.656 "reset": true, 00:15:57.656 "nvme_admin": false, 00:15:57.656 "nvme_io": false, 00:15:57.656 "nvme_io_md": false, 00:15:57.656 "write_zeroes": true, 00:15:57.656 "zcopy": true, 00:15:57.656 "get_zone_info": false, 00:15:57.656 "zone_management": false, 00:15:57.656 "zone_append": false, 00:15:57.656 "compare": false, 00:15:57.656 "compare_and_write": false, 00:15:57.656 "abort": true, 00:15:57.656 "seek_hole": false, 00:15:57.656 "seek_data": false, 00:15:57.656 "copy": true, 00:15:57.656 "nvme_iov_md": false 00:15:57.656 }, 00:15:57.656 "memory_domains": [ 00:15:57.656 { 00:15:57.656 "dma_device_id": "system", 00:15:57.656 "dma_device_type": 1 00:15:57.656 }, 00:15:57.656 { 00:15:57.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.656 "dma_device_type": 2 00:15:57.656 } 00:15:57.656 ], 00:15:57.656 "driver_specific": {} 00:15:57.656 } 00:15:57.656 ] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.656 12:46:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.656 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.656 "name": "Existed_Raid", 00:15:57.656 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:57.656 "strip_size_kb": 64, 00:15:57.656 "state": "online", 00:15:57.656 "raid_level": "raid5f", 00:15:57.656 "superblock": true, 00:15:57.657 "num_base_bdevs": 3, 00:15:57.657 "num_base_bdevs_discovered": 3, 00:15:57.657 "num_base_bdevs_operational": 3, 00:15:57.657 
"base_bdevs_list": [ 00:15:57.657 { 00:15:57.657 "name": "NewBaseBdev", 00:15:57.657 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:57.657 "is_configured": true, 00:15:57.657 "data_offset": 2048, 00:15:57.657 "data_size": 63488 00:15:57.657 }, 00:15:57.657 { 00:15:57.657 "name": "BaseBdev2", 00:15:57.657 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:57.657 "is_configured": true, 00:15:57.657 "data_offset": 2048, 00:15:57.657 "data_size": 63488 00:15:57.657 }, 00:15:57.657 { 00:15:57.657 "name": "BaseBdev3", 00:15:57.657 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:57.657 "is_configured": true, 00:15:57.657 "data_offset": 2048, 00:15:57.657 "data_size": 63488 00:15:57.657 } 00:15:57.657 ] 00:15:57.657 }' 00:15:57.657 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.657 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.279 [2024-11-06 12:46:46.730838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.279 "name": "Existed_Raid", 00:15:58.279 "aliases": [ 00:15:58.279 "78269082-dfc9-4c32-8e1e-a6d2c183a62d" 00:15:58.279 ], 00:15:58.279 "product_name": "Raid Volume", 00:15:58.279 "block_size": 512, 00:15:58.279 "num_blocks": 126976, 00:15:58.279 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:58.279 "assigned_rate_limits": { 00:15:58.279 "rw_ios_per_sec": 0, 00:15:58.279 "rw_mbytes_per_sec": 0, 00:15:58.279 "r_mbytes_per_sec": 0, 00:15:58.279 "w_mbytes_per_sec": 0 00:15:58.279 }, 00:15:58.279 "claimed": false, 00:15:58.279 "zoned": false, 00:15:58.279 "supported_io_types": { 00:15:58.279 "read": true, 00:15:58.279 "write": true, 00:15:58.279 "unmap": false, 00:15:58.279 "flush": false, 00:15:58.279 "reset": true, 00:15:58.279 "nvme_admin": false, 00:15:58.279 "nvme_io": false, 00:15:58.279 "nvme_io_md": false, 00:15:58.279 "write_zeroes": true, 00:15:58.279 "zcopy": false, 00:15:58.279 "get_zone_info": false, 00:15:58.279 "zone_management": false, 00:15:58.279 "zone_append": false, 00:15:58.279 "compare": false, 00:15:58.279 "compare_and_write": false, 00:15:58.279 "abort": false, 00:15:58.279 "seek_hole": false, 00:15:58.279 "seek_data": false, 00:15:58.279 "copy": false, 00:15:58.279 "nvme_iov_md": false 00:15:58.279 }, 00:15:58.279 "driver_specific": { 00:15:58.279 "raid": { 00:15:58.279 "uuid": "78269082-dfc9-4c32-8e1e-a6d2c183a62d", 00:15:58.279 "strip_size_kb": 64, 00:15:58.279 "state": "online", 00:15:58.279 "raid_level": "raid5f", 00:15:58.279 "superblock": true, 00:15:58.279 
"num_base_bdevs": 3, 00:15:58.279 "num_base_bdevs_discovered": 3, 00:15:58.279 "num_base_bdevs_operational": 3, 00:15:58.279 "base_bdevs_list": [ 00:15:58.279 { 00:15:58.279 "name": "NewBaseBdev", 00:15:58.279 "uuid": "c6a1eccf-1457-47d6-a472-94497c640706", 00:15:58.279 "is_configured": true, 00:15:58.279 "data_offset": 2048, 00:15:58.279 "data_size": 63488 00:15:58.279 }, 00:15:58.279 { 00:15:58.279 "name": "BaseBdev2", 00:15:58.279 "uuid": "27f8dffb-47c3-42b6-8837-71f9249620de", 00:15:58.279 "is_configured": true, 00:15:58.279 "data_offset": 2048, 00:15:58.279 "data_size": 63488 00:15:58.279 }, 00:15:58.279 { 00:15:58.279 "name": "BaseBdev3", 00:15:58.279 "uuid": "eff8686f-9b22-47ce-9c4c-a0e1efd85545", 00:15:58.279 "is_configured": true, 00:15:58.279 "data_offset": 2048, 00:15:58.279 "data_size": 63488 00:15:58.279 } 00:15:58.279 ] 00:15:58.279 } 00:15:58.279 } 00:15:58.279 }' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:58.279 BaseBdev2 00:15:58.279 BaseBdev3' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.279 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:58.539 12:46:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.539 [2024-11-06 12:46:47.050658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.539 [2024-11-06 12:46:47.050705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.539 [2024-11-06 12:46:47.050797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.539 [2024-11-06 12:46:47.051188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.539 [2024-11-06 12:46:47.051209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80942 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80942 ']' 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80942 00:15:58.539 12:46:47 
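The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above compare a per-bdev "fingerprint" built by `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`. A small sketch of how that string comes out as `'512   '` (jq's `join` renders absent/null fields as empty strings, leaving trailing blanks that the escaped bash pattern then matches):

```python
# Equivalent of:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# For a bdev with no metadata or DIF configured, the last three entries are
# null, so join() yields "512" followed by three spaces -- matching the
# `[[ 512 == \5\1\2\ \ \ ]]` test in the trace.
bdev = {"block_size": 512}  # md_size / md_interleave / dif_type unset

fields = [bdev.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type")]
fingerprint = " ".join("" if v is None else str(v) for v in fields)
print(repr(fingerprint))
```

The same fingerprint is computed once for the raid volume (`cmp_raid_bdev`) and once per base bdev (`cmp_base_bdev`), so the test passes only when every member reports the identical block size and metadata layout.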
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80942 00:15:58.539 killing process with pid 80942 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80942' 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80942 00:15:58.539 [2024-11-06 12:46:47.090027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.539 12:46:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80942 00:15:58.798 [2024-11-06 12:46:47.345419] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.733 ************************************ 00:15:59.733 END TEST raid5f_state_function_test_sb 00:15:59.733 ************************************ 00:15:59.733 12:46:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:59.733 00:15:59.733 real 0m11.910s 00:15:59.733 user 0m19.716s 00:15:59.733 sys 0m1.740s 00:15:59.733 12:46:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:59.733 12:46:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.991 12:46:48 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:59.991 12:46:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:59.991 
12:46:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:59.991 12:46:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.991 ************************************ 00:15:59.991 START TEST raid5f_superblock_test 00:15:59.991 ************************************ 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81579 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81579 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81579 ']' 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:59.991 12:46:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.991 [2024-11-06 12:46:48.520422] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:15:59.991 [2024-11-06 12:46:48.520621] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81579 ] 00:16:00.249 [2024-11-06 12:46:48.700680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.249 [2024-11-06 12:46:48.830447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.507 [2024-11-06 12:46:49.036152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.507 [2024-11-06 12:46:49.036218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.074 malloc1 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.074 [2024-11-06 12:46:49.528158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.074 [2024-11-06 12:46:49.528522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.074 [2024-11-06 12:46:49.528562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.074 [2024-11-06 12:46:49.528577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.074 [2024-11-06 12:46:49.531212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.074 [2024-11-06 12:46:49.531250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.074 pt1 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.074 malloc2 00:16:01.074 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.075 [2024-11-06 12:46:49.582518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.075 [2024-11-06 12:46:49.582580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.075 [2024-11-06 12:46:49.582612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.075 [2024-11-06 12:46:49.582626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.075 [2024-11-06 12:46:49.585178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.075 [2024-11-06 12:46:49.585237] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.075 pt2 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.075 malloc3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.075 [2024-11-06 12:46:49.639571] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:01.075 [2024-11-06 12:46:49.639650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.075 [2024-11-06 12:46:49.639696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:01.075 [2024-11-06 12:46:49.639710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.075 [2024-11-06 12:46:49.642241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.075 [2024-11-06 12:46:49.642297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:01.075 pt3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.075 [2024-11-06 12:46:49.651655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.075 [2024-11-06 12:46:49.653923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.075 [2024-11-06 12:46:49.654311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.075 [2024-11-06 12:46:49.654532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:01.075 [2024-11-06 12:46:49.654561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:01.075 [2024-11-06 12:46:49.654856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:01.075 [2024-11-06 12:46:49.659836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:01.075 [2024-11-06 12:46:49.659860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:01.075 [2024-11-06 12:46:49.660059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.075 "name": "raid_bdev1", 00:16:01.075 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:01.075 "strip_size_kb": 64, 00:16:01.075 "state": "online", 00:16:01.075 "raid_level": "raid5f", 00:16:01.075 "superblock": true, 00:16:01.075 "num_base_bdevs": 3, 00:16:01.075 "num_base_bdevs_discovered": 3, 00:16:01.075 "num_base_bdevs_operational": 3, 00:16:01.075 "base_bdevs_list": [ 00:16:01.075 { 00:16:01.075 "name": "pt1", 00:16:01.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.075 "is_configured": true, 00:16:01.075 "data_offset": 2048, 00:16:01.075 "data_size": 63488 00:16:01.075 }, 00:16:01.075 { 00:16:01.075 "name": "pt2", 00:16:01.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.075 "is_configured": true, 00:16:01.075 "data_offset": 2048, 00:16:01.075 "data_size": 63488 00:16:01.075 }, 00:16:01.075 { 00:16:01.075 "name": "pt3", 00:16:01.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.075 "is_configured": true, 00:16:01.075 "data_offset": 2048, 00:16:01.075 "data_size": 63488 00:16:01.075 } 00:16:01.075 ] 00:16:01.075 }' 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.075 12:46:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:01.640 12:46:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:01.640 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.641 [2024-11-06 12:46:50.185791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:01.641 "name": "raid_bdev1", 00:16:01.641 "aliases": [ 00:16:01.641 "57f693cc-67f9-4f23-8cfc-cd5a498bcc44" 00:16:01.641 ], 00:16:01.641 "product_name": "Raid Volume", 00:16:01.641 "block_size": 512, 00:16:01.641 "num_blocks": 126976, 00:16:01.641 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:01.641 "assigned_rate_limits": { 00:16:01.641 "rw_ios_per_sec": 0, 00:16:01.641 "rw_mbytes_per_sec": 0, 00:16:01.641 "r_mbytes_per_sec": 0, 00:16:01.641 "w_mbytes_per_sec": 0 00:16:01.641 }, 00:16:01.641 "claimed": false, 00:16:01.641 "zoned": false, 00:16:01.641 "supported_io_types": { 00:16:01.641 "read": true, 00:16:01.641 "write": true, 00:16:01.641 "unmap": false, 00:16:01.641 "flush": false, 00:16:01.641 "reset": true, 00:16:01.641 "nvme_admin": false, 00:16:01.641 "nvme_io": false, 00:16:01.641 "nvme_io_md": false, 
00:16:01.641 "write_zeroes": true, 00:16:01.641 "zcopy": false, 00:16:01.641 "get_zone_info": false, 00:16:01.641 "zone_management": false, 00:16:01.641 "zone_append": false, 00:16:01.641 "compare": false, 00:16:01.641 "compare_and_write": false, 00:16:01.641 "abort": false, 00:16:01.641 "seek_hole": false, 00:16:01.641 "seek_data": false, 00:16:01.641 "copy": false, 00:16:01.641 "nvme_iov_md": false 00:16:01.641 }, 00:16:01.641 "driver_specific": { 00:16:01.641 "raid": { 00:16:01.641 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:01.641 "strip_size_kb": 64, 00:16:01.641 "state": "online", 00:16:01.641 "raid_level": "raid5f", 00:16:01.641 "superblock": true, 00:16:01.641 "num_base_bdevs": 3, 00:16:01.641 "num_base_bdevs_discovered": 3, 00:16:01.641 "num_base_bdevs_operational": 3, 00:16:01.641 "base_bdevs_list": [ 00:16:01.641 { 00:16:01.641 "name": "pt1", 00:16:01.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.641 "is_configured": true, 00:16:01.641 "data_offset": 2048, 00:16:01.641 "data_size": 63488 00:16:01.641 }, 00:16:01.641 { 00:16:01.641 "name": "pt2", 00:16:01.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.641 "is_configured": true, 00:16:01.641 "data_offset": 2048, 00:16:01.641 "data_size": 63488 00:16:01.641 }, 00:16:01.641 { 00:16:01.641 "name": "pt3", 00:16:01.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.641 "is_configured": true, 00:16:01.641 "data_offset": 2048, 00:16:01.641 "data_size": 63488 00:16:01.641 } 00:16:01.641 ] 00:16:01.641 } 00:16:01.641 } 00:16:01.641 }' 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:01.641 pt2 00:16:01.641 pt3' 00:16:01.641 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.899 
12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:01.899 [2024-11-06 12:46:50.509813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.899 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=57f693cc-67f9-4f23-8cfc-cd5a498bcc44 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 57f693cc-67f9-4f23-8cfc-cd5a498bcc44 ']' 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.159 12:46:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 [2024-11-06 12:46:50.561593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.159 [2024-11-06 12:46:50.561630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.159 [2024-11-06 12:46:50.561707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.159 [2024-11-06 12:46:50.561801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.159 [2024-11-06 12:46:50.561817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 [2024-11-06 12:46:50.709687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:02.159 [2024-11-06 12:46:50.712002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:02.159 [2024-11-06 12:46:50.712061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:02.159 [2024-11-06 12:46:50.712124] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:02.159 [2024-11-06 12:46:50.712341] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:02.159 [2024-11-06 12:46:50.712441] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:02.159 [2024-11-06 12:46:50.712632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.159 [2024-11-06 12:46:50.712683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:02.159 request: 00:16:02.159 { 00:16:02.159 "name": "raid_bdev1", 00:16:02.159 "raid_level": "raid5f", 00:16:02.159 "base_bdevs": [ 00:16:02.159 "malloc1", 00:16:02.159 "malloc2", 00:16:02.159 "malloc3" 00:16:02.159 ], 00:16:02.159 "strip_size_kb": 64, 00:16:02.159 "superblock": false, 00:16:02.159 "method": "bdev_raid_create", 00:16:02.159 "req_id": 1 00:16:02.159 } 00:16:02.159 Got JSON-RPC error response 00:16:02.159 response: 00:16:02.159 { 00:16:02.159 "code": -17, 00:16:02.159 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:02.159 } 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:02.159 
12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.159 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.159 [2024-11-06 12:46:50.781626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.160 [2024-11-06 12:46:50.781807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.160 [2024-11-06 12:46:50.782014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:02.160 [2024-11-06 12:46:50.782120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.160 [2024-11-06 12:46:50.784957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.160 [2024-11-06 12:46:50.785132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.160 [2024-11-06 12:46:50.785334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:02.160 [2024-11-06 12:46:50.785497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.160 pt1 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.160 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.418 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.418 "name": "raid_bdev1", 00:16:02.418 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:02.418 "strip_size_kb": 64, 00:16:02.418 "state": "configuring", 00:16:02.418 "raid_level": "raid5f", 00:16:02.418 "superblock": true, 00:16:02.418 "num_base_bdevs": 3, 00:16:02.418 "num_base_bdevs_discovered": 1, 00:16:02.418 
"num_base_bdevs_operational": 3, 00:16:02.418 "base_bdevs_list": [ 00:16:02.418 { 00:16:02.418 "name": "pt1", 00:16:02.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.418 "is_configured": true, 00:16:02.418 "data_offset": 2048, 00:16:02.418 "data_size": 63488 00:16:02.418 }, 00:16:02.418 { 00:16:02.418 "name": null, 00:16:02.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.418 "is_configured": false, 00:16:02.418 "data_offset": 2048, 00:16:02.418 "data_size": 63488 00:16:02.418 }, 00:16:02.418 { 00:16:02.418 "name": null, 00:16:02.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.418 "is_configured": false, 00:16:02.418 "data_offset": 2048, 00:16:02.418 "data_size": 63488 00:16:02.418 } 00:16:02.418 ] 00:16:02.418 }' 00:16:02.418 12:46:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.418 12:46:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.676 [2024-11-06 12:46:51.294030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.676 [2024-11-06 12:46:51.294496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.676 [2024-11-06 12:46:51.294596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:02.676 [2024-11-06 12:46:51.294618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.676 [2024-11-06 12:46:51.295530] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.676 [2024-11-06 12:46:51.295588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.676 [2024-11-06 12:46:51.295759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:02.676 [2024-11-06 12:46:51.295800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.676 pt2 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.676 [2024-11-06 12:46:51.302063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.676 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.934 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.934 "name": "raid_bdev1", 00:16:02.934 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:02.934 "strip_size_kb": 64, 00:16:02.934 "state": "configuring", 00:16:02.934 "raid_level": "raid5f", 00:16:02.934 "superblock": true, 00:16:02.934 "num_base_bdevs": 3, 00:16:02.934 "num_base_bdevs_discovered": 1, 00:16:02.934 "num_base_bdevs_operational": 3, 00:16:02.934 "base_bdevs_list": [ 00:16:02.934 { 00:16:02.934 "name": "pt1", 00:16:02.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.934 "is_configured": true, 00:16:02.934 "data_offset": 2048, 00:16:02.934 "data_size": 63488 00:16:02.934 }, 00:16:02.934 { 00:16:02.934 "name": null, 00:16:02.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.934 "is_configured": false, 00:16:02.934 "data_offset": 0, 00:16:02.934 "data_size": 63488 00:16:02.934 }, 00:16:02.934 { 00:16:02.934 "name": null, 00:16:02.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.934 "is_configured": false, 00:16:02.934 "data_offset": 2048, 00:16:02.934 "data_size": 63488 00:16:02.934 } 00:16:02.934 ] 00:16:02.934 }' 00:16:02.934 12:46:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.934 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.192 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:03.192 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.192 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.192 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.192 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.192 [2024-11-06 12:46:51.830171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.192 [2024-11-06 12:46:51.830298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.192 [2024-11-06 12:46:51.830334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:03.192 [2024-11-06 12:46:51.830355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.192 [2024-11-06 12:46:51.831045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.192 [2024-11-06 12:46:51.831092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.192 [2024-11-06 12:46:51.831238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.192 [2024-11-06 12:46:51.831280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.192 pt2 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:03.193 12:46:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.193 [2024-11-06 12:46:51.838108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:03.193 [2024-11-06 12:46:51.838316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.193 [2024-11-06 12:46:51.838349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.193 [2024-11-06 12:46:51.838368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.193 [2024-11-06 12:46:51.838836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.193 [2024-11-06 12:46:51.838879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:03.193 [2024-11-06 12:46:51.838956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:03.193 [2024-11-06 12:46:51.838989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:03.193 [2024-11-06 12:46:51.839174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:03.193 [2024-11-06 12:46:51.839216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.193 [2024-11-06 12:46:51.839555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:03.193 [2024-11-06 12:46:51.844748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:03.193 [2024-11-06 12:46:51.844774] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:03.193 [2024-11-06 12:46:51.845021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.193 pt3 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:03.193 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.452 "name": "raid_bdev1", 00:16:03.452 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:03.452 "strip_size_kb": 64, 00:16:03.452 "state": "online", 00:16:03.452 "raid_level": "raid5f", 00:16:03.452 "superblock": true, 00:16:03.452 "num_base_bdevs": 3, 00:16:03.452 "num_base_bdevs_discovered": 3, 00:16:03.452 "num_base_bdevs_operational": 3, 00:16:03.452 "base_bdevs_list": [ 00:16:03.452 { 00:16:03.452 "name": "pt1", 00:16:03.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.452 "is_configured": true, 00:16:03.452 "data_offset": 2048, 00:16:03.452 "data_size": 63488 00:16:03.452 }, 00:16:03.452 { 00:16:03.452 "name": "pt2", 00:16:03.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.452 "is_configured": true, 00:16:03.452 "data_offset": 2048, 00:16:03.452 "data_size": 63488 00:16:03.452 }, 00:16:03.452 { 00:16:03.452 "name": "pt3", 00:16:03.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.452 "is_configured": true, 00:16:03.452 "data_offset": 2048, 00:16:03.452 "data_size": 63488 00:16:03.452 } 00:16:03.452 ] 00:16:03.452 }' 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.452 12:46:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.710 
12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.710 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.968 [2024-11-06 12:46:52.367609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.968 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.968 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.968 "name": "raid_bdev1", 00:16:03.968 "aliases": [ 00:16:03.968 "57f693cc-67f9-4f23-8cfc-cd5a498bcc44" 00:16:03.968 ], 00:16:03.968 "product_name": "Raid Volume", 00:16:03.968 "block_size": 512, 00:16:03.968 "num_blocks": 126976, 00:16:03.968 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:03.968 "assigned_rate_limits": { 00:16:03.968 "rw_ios_per_sec": 0, 00:16:03.968 "rw_mbytes_per_sec": 0, 00:16:03.968 "r_mbytes_per_sec": 0, 00:16:03.968 "w_mbytes_per_sec": 0 00:16:03.968 }, 00:16:03.968 "claimed": false, 00:16:03.968 "zoned": false, 00:16:03.968 "supported_io_types": { 00:16:03.968 "read": true, 00:16:03.968 "write": true, 00:16:03.968 "unmap": false, 00:16:03.968 "flush": false, 00:16:03.968 "reset": true, 00:16:03.968 "nvme_admin": false, 00:16:03.968 "nvme_io": false, 00:16:03.968 "nvme_io_md": false, 00:16:03.968 "write_zeroes": true, 00:16:03.968 "zcopy": false, 00:16:03.968 "get_zone_info": false, 
00:16:03.968 "zone_management": false, 00:16:03.968 "zone_append": false, 00:16:03.968 "compare": false, 00:16:03.968 "compare_and_write": false, 00:16:03.968 "abort": false, 00:16:03.968 "seek_hole": false, 00:16:03.968 "seek_data": false, 00:16:03.968 "copy": false, 00:16:03.968 "nvme_iov_md": false 00:16:03.968 }, 00:16:03.968 "driver_specific": { 00:16:03.968 "raid": { 00:16:03.968 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:03.968 "strip_size_kb": 64, 00:16:03.968 "state": "online", 00:16:03.968 "raid_level": "raid5f", 00:16:03.968 "superblock": true, 00:16:03.968 "num_base_bdevs": 3, 00:16:03.968 "num_base_bdevs_discovered": 3, 00:16:03.968 "num_base_bdevs_operational": 3, 00:16:03.968 "base_bdevs_list": [ 00:16:03.968 { 00:16:03.968 "name": "pt1", 00:16:03.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.969 "is_configured": true, 00:16:03.969 "data_offset": 2048, 00:16:03.969 "data_size": 63488 00:16:03.969 }, 00:16:03.969 { 00:16:03.969 "name": "pt2", 00:16:03.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.969 "is_configured": true, 00:16:03.969 "data_offset": 2048, 00:16:03.969 "data_size": 63488 00:16:03.969 }, 00:16:03.969 { 00:16:03.969 "name": "pt3", 00:16:03.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.969 "is_configured": true, 00:16:03.969 "data_offset": 2048, 00:16:03.969 "data_size": 63488 00:16:03.969 } 00:16:03.969 ] 00:16:03.969 } 00:16:03.969 } 00:16:03.969 }' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:03.969 pt2 00:16:03.969 pt3' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.969 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:04.227 [2024-11-06 12:46:52.707579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 57f693cc-67f9-4f23-8cfc-cd5a498bcc44 '!=' 57f693cc-67f9-4f23-8cfc-cd5a498bcc44 ']' 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.227 12:46:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.227 [2024-11-06 12:46:52.755429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.227 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.228 12:46:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.228 "name": "raid_bdev1", 00:16:04.228 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:04.228 "strip_size_kb": 64, 00:16:04.228 "state": "online", 00:16:04.228 "raid_level": "raid5f", 00:16:04.228 "superblock": true, 00:16:04.228 "num_base_bdevs": 3, 00:16:04.228 "num_base_bdevs_discovered": 2, 00:16:04.228 "num_base_bdevs_operational": 2, 00:16:04.228 "base_bdevs_list": [ 00:16:04.228 { 00:16:04.228 "name": null, 00:16:04.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.228 "is_configured": false, 00:16:04.228 "data_offset": 0, 00:16:04.228 "data_size": 63488 00:16:04.228 }, 00:16:04.228 { 00:16:04.228 "name": "pt2", 00:16:04.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.228 "is_configured": true, 00:16:04.228 "data_offset": 2048, 00:16:04.228 "data_size": 63488 00:16:04.228 }, 00:16:04.228 { 00:16:04.228 "name": "pt3", 00:16:04.228 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.228 "is_configured": true, 00:16:04.228 "data_offset": 2048, 00:16:04.228 "data_size": 63488 00:16:04.228 } 00:16:04.228 ] 00:16:04.228 }' 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.228 12:46:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 [2024-11-06 12:46:53.283557] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.795 [2024-11-06 12:46:53.283605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.795 [2024-11-06 12:46:53.283720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.795 [2024-11-06 12:46:53.283808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.795 [2024-11-06 12:46:53.283834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 [2024-11-06 12:46:53.359494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.795 [2024-11-06 12:46:53.359754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.795 [2024-11-06 12:46:53.359793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:04.795 [2024-11-06 12:46:53.359812] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:04.795 [2024-11-06 12:46:53.362874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.795 [2024-11-06 12:46:53.363044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.795 [2024-11-06 12:46:53.363163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.795 [2024-11-06 12:46:53.363253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.795 pt2 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.795 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.795 "name": "raid_bdev1", 00:16:04.795 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:04.795 "strip_size_kb": 64, 00:16:04.795 "state": "configuring", 00:16:04.795 "raid_level": "raid5f", 00:16:04.795 "superblock": true, 00:16:04.795 "num_base_bdevs": 3, 00:16:04.795 "num_base_bdevs_discovered": 1, 00:16:04.795 "num_base_bdevs_operational": 2, 00:16:04.795 "base_bdevs_list": [ 00:16:04.795 { 00:16:04.795 "name": null, 00:16:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.795 "is_configured": false, 00:16:04.795 "data_offset": 2048, 00:16:04.795 "data_size": 63488 00:16:04.795 }, 00:16:04.795 { 00:16:04.795 "name": "pt2", 00:16:04.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.795 "is_configured": true, 00:16:04.795 "data_offset": 2048, 00:16:04.795 "data_size": 63488 00:16:04.795 }, 00:16:04.795 { 00:16:04.795 "name": null, 00:16:04.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.795 "is_configured": false, 00:16:04.795 "data_offset": 2048, 00:16:04.795 "data_size": 63488 00:16:04.795 } 00:16:04.795 ] 00:16:04.795 }' 00:16:04.796 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.796 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.370 [2024-11-06 12:46:53.915715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:05.370 [2024-11-06 12:46:53.915826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.370 [2024-11-06 12:46:53.915880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:05.370 [2024-11-06 12:46:53.915907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.370 [2024-11-06 12:46:53.916775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.370 [2024-11-06 12:46:53.916979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:05.370 [2024-11-06 12:46:53.917276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:05.370 [2024-11-06 12:46:53.917448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:05.370 [2024-11-06 12:46:53.917725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:05.370 [2024-11-06 12:46:53.917757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:05.370 [2024-11-06 12:46:53.918100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:05.370 [2024-11-06 12:46:53.923337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:05.370 [2024-11-06 12:46:53.923481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:05.370 [2024-11-06 12:46:53.923919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.370 pt3 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.370 12:46:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.370 "name": "raid_bdev1", 00:16:05.370 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:05.370 "strip_size_kb": 64, 00:16:05.370 "state": "online", 00:16:05.370 "raid_level": "raid5f", 00:16:05.370 "superblock": true, 00:16:05.370 "num_base_bdevs": 3, 00:16:05.370 "num_base_bdevs_discovered": 2, 00:16:05.370 "num_base_bdevs_operational": 2, 00:16:05.370 "base_bdevs_list": [ 00:16:05.370 { 00:16:05.370 "name": null, 00:16:05.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.370 "is_configured": false, 00:16:05.370 "data_offset": 2048, 00:16:05.370 "data_size": 63488 00:16:05.370 }, 00:16:05.370 { 00:16:05.370 "name": "pt2", 00:16:05.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.370 "is_configured": true, 00:16:05.370 "data_offset": 2048, 00:16:05.370 "data_size": 63488 00:16:05.370 }, 00:16:05.370 { 00:16:05.370 "name": "pt3", 00:16:05.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.370 "is_configured": true, 00:16:05.370 "data_offset": 2048, 00:16:05.370 "data_size": 63488 00:16:05.370 } 00:16:05.370 ] 00:16:05.370 }' 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.370 12:46:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 [2024-11-06 12:46:54.446257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.952 [2024-11-06 12:46:54.446304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.952 [2024-11-06 12:46:54.446429] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.952 [2024-11-06 12:46:54.446521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.952 [2024-11-06 12:46:54.446538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 [2024-11-06 12:46:54.518251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:05.952 [2024-11-06 12:46:54.518328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.952 [2024-11-06 12:46:54.518372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:05.952 [2024-11-06 12:46:54.518390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.952 [2024-11-06 12:46:54.521977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.952 [2024-11-06 12:46:54.522025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:05.952 [2024-11-06 12:46:54.522149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:05.952 [2024-11-06 12:46:54.522227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:05.952 [2024-11-06 12:46:54.522398] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:05.952 [2024-11-06 12:46:54.522416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.952 [2024-11-06 12:46:54.522444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:05.952 [2024-11-06 12:46:54.522539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.952 pt1 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:05.952 12:46:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.952 "name": "raid_bdev1", 00:16:05.952 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:05.952 "strip_size_kb": 64, 00:16:05.952 "state": "configuring", 00:16:05.952 "raid_level": "raid5f", 00:16:05.952 
"superblock": true, 00:16:05.952 "num_base_bdevs": 3, 00:16:05.952 "num_base_bdevs_discovered": 1, 00:16:05.952 "num_base_bdevs_operational": 2, 00:16:05.952 "base_bdevs_list": [ 00:16:05.952 { 00:16:05.952 "name": null, 00:16:05.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.952 "is_configured": false, 00:16:05.952 "data_offset": 2048, 00:16:05.952 "data_size": 63488 00:16:05.952 }, 00:16:05.952 { 00:16:05.952 "name": "pt2", 00:16:05.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.952 "is_configured": true, 00:16:05.952 "data_offset": 2048, 00:16:05.952 "data_size": 63488 00:16:05.952 }, 00:16:05.952 { 00:16:05.952 "name": null, 00:16:05.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.952 "is_configured": false, 00:16:05.952 "data_offset": 2048, 00:16:05.952 "data_size": 63488 00:16:05.952 } 00:16:05.952 ] 00:16:05.952 }' 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.952 12:46:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.519 [2024-11-06 12:46:55.114727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:06.519 [2024-11-06 12:46:55.114949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.519 [2024-11-06 12:46:55.114999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:06.519 [2024-11-06 12:46:55.115016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.519 [2024-11-06 12:46:55.115743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.519 [2024-11-06 12:46:55.115776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:06.519 [2024-11-06 12:46:55.115897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:06.519 [2024-11-06 12:46:55.115939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:06.519 [2024-11-06 12:46:55.116102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:06.519 [2024-11-06 12:46:55.116118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:06.519 [2024-11-06 12:46:55.116455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:06.519 [2024-11-06 12:46:55.121595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:06.519 [2024-11-06 12:46:55.121747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:06.519 [2024-11-06 12:46:55.122229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.519 pt3 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.519 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.777 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.777 "name": "raid_bdev1", 00:16:06.777 "uuid": "57f693cc-67f9-4f23-8cfc-cd5a498bcc44", 00:16:06.777 "strip_size_kb": 64, 00:16:06.777 "state": "online", 00:16:06.777 "raid_level": 
"raid5f", 00:16:06.777 "superblock": true, 00:16:06.777 "num_base_bdevs": 3, 00:16:06.777 "num_base_bdevs_discovered": 2, 00:16:06.777 "num_base_bdevs_operational": 2, 00:16:06.777 "base_bdevs_list": [ 00:16:06.777 { 00:16:06.777 "name": null, 00:16:06.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.777 "is_configured": false, 00:16:06.777 "data_offset": 2048, 00:16:06.777 "data_size": 63488 00:16:06.777 }, 00:16:06.777 { 00:16:06.777 "name": "pt2", 00:16:06.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.777 "is_configured": true, 00:16:06.777 "data_offset": 2048, 00:16:06.777 "data_size": 63488 00:16:06.777 }, 00:16:06.777 { 00:16:06.777 "name": "pt3", 00:16:06.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.778 "is_configured": true, 00:16:06.778 "data_offset": 2048, 00:16:06.778 "data_size": 63488 00:16:06.778 } 00:16:06.778 ] 00:16:06.778 }' 00:16:06.778 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.778 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.035 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:07.035 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.036 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.036 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:07.036 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.036 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.294 [2024-11-06 12:46:55.696752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 57f693cc-67f9-4f23-8cfc-cd5a498bcc44 '!=' 57f693cc-67f9-4f23-8cfc-cd5a498bcc44 ']' 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81579 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81579 ']' 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81579 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81579 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:07.294 killing process with pid 81579 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81579' 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81579 00:16:07.294 [2024-11-06 12:46:55.774469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.294 12:46:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81579 
00:16:07.294 [2024-11-06 12:46:55.774599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.294 [2024-11-06 12:46:55.774688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.294 [2024-11-06 12:46:55.774708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:07.552 [2024-11-06 12:46:56.058213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.925 ************************************ 00:16:08.925 END TEST raid5f_superblock_test 00:16:08.925 ************************************ 00:16:08.925 12:46:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:08.925 00:16:08.925 real 0m8.758s 00:16:08.925 user 0m14.224s 00:16:08.925 sys 0m1.311s 00:16:08.925 12:46:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:08.925 12:46:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.925 12:46:57 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:08.925 12:46:57 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:08.925 12:46:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:08.925 12:46:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:08.925 12:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.925 ************************************ 00:16:08.925 START TEST raid5f_rebuild_test 00:16:08.925 ************************************ 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.925 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.926 12:46:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82027 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82027 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82027 ']' 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:08.926 12:46:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.926 [2024-11-06 12:46:57.334225] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:16:08.926 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:08.926 Zero copy mechanism will not be used. 00:16:08.926 [2024-11-06 12:46:57.334564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82027 ] 00:16:08.926 [2024-11-06 12:46:57.513914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.184 [2024-11-06 12:46:57.674553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.442 [2024-11-06 12:46:57.899124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.442 [2024-11-06 12:46:57.899200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.007 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:10.007 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:10.007 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 BaseBdev1_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 
12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 [2024-11-06 12:46:58.422092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.008 [2024-11-06 12:46:58.422198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.008 [2024-11-06 12:46:58.422268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.008 [2024-11-06 12:46:58.422290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.008 [2024-11-06 12:46:58.425384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.008 [2024-11-06 12:46:58.425437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.008 BaseBdev1 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 BaseBdev2_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 [2024-11-06 12:46:58.478762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:10.008 [2024-11-06 12:46:58.478842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.008 [2024-11-06 12:46:58.478874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:10.008 [2024-11-06 12:46:58.478896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.008 [2024-11-06 12:46:58.481827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.008 [2024-11-06 12:46:58.481879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.008 BaseBdev2 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 BaseBdev3_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 [2024-11-06 12:46:58.549825] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:10.008 [2024-11-06 12:46:58.549930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.008 [2024-11-06 12:46:58.549965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:10.008 [2024-11-06 12:46:58.549986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.008 [2024-11-06 12:46:58.552937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.008 [2024-11-06 12:46:58.553134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:10.008 BaseBdev3 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 spare_malloc 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 spare_delay 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 [2024-11-06 12:46:58.615305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.008 [2024-11-06 12:46:58.615389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.008 [2024-11-06 12:46:58.615418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:10.008 [2024-11-06 12:46:58.615437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.008 [2024-11-06 12:46:58.618461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.008 [2024-11-06 12:46:58.618529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.008 spare 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 [2024-11-06 12:46:58.623488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.008 [2024-11-06 12:46:58.626279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.008 [2024-11-06 12:46:58.626488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.008 [2024-11-06 12:46:58.626758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.008 [2024-11-06 12:46:58.626871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:10.008 [2024-11-06 
12:46:58.627266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:10.008 [2024-11-06 12:46:58.632621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.008 [2024-11-06 12:46:58.632759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.008 [2024-11-06 12:46:58.633136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.008 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.267 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.267 "name": "raid_bdev1", 00:16:10.267 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:10.267 "strip_size_kb": 64, 00:16:10.267 "state": "online", 00:16:10.267 "raid_level": "raid5f", 00:16:10.267 "superblock": false, 00:16:10.267 "num_base_bdevs": 3, 00:16:10.267 "num_base_bdevs_discovered": 3, 00:16:10.267 "num_base_bdevs_operational": 3, 00:16:10.267 "base_bdevs_list": [ 00:16:10.267 { 00:16:10.267 "name": "BaseBdev1", 00:16:10.267 "uuid": "77b8d93b-a11e-57b3-a52d-fee913e8a35d", 00:16:10.267 "is_configured": true, 00:16:10.267 "data_offset": 0, 00:16:10.267 "data_size": 65536 00:16:10.267 }, 00:16:10.267 { 00:16:10.267 "name": "BaseBdev2", 00:16:10.267 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:10.267 "is_configured": true, 00:16:10.267 "data_offset": 0, 00:16:10.267 "data_size": 65536 00:16:10.267 }, 00:16:10.267 { 00:16:10.267 "name": "BaseBdev3", 00:16:10.267 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:10.267 "is_configured": true, 00:16:10.267 "data_offset": 0, 00:16:10.267 "data_size": 65536 00:16:10.267 } 00:16:10.267 ] 00:16:10.267 }' 00:16:10.267 12:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.267 12:46:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.525 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.525 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:10.525 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.525 12:46:59 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.525 [2024-11-06 12:46:59.163726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.783 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:11.043 [2024-11-06 12:46:59.563620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.043 /dev/nbd0 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.043 1+0 records in 00:16:11.043 1+0 records out 00:16:11.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660237 s, 6.2 MB/s 00:16:11.043 
12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:11.043 12:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:11.673 512+0 records in 00:16:11.673 512+0 records out 00:16:11.673 67108864 bytes (67 MB, 64 MiB) copied, 0.46217 s, 145 MB/s 00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:16:11.673 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.931 [2024-11-06 12:47:00.370012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.931 [2024-11-06 12:47:00.384807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.931 12:47:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.931 "name": "raid_bdev1", 00:16:11.931 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:11.931 "strip_size_kb": 64, 00:16:11.931 "state": "online", 00:16:11.931 "raid_level": "raid5f", 00:16:11.931 "superblock": false, 00:16:11.931 "num_base_bdevs": 3, 00:16:11.931 "num_base_bdevs_discovered": 2, 00:16:11.931 "num_base_bdevs_operational": 2, 00:16:11.931 "base_bdevs_list": [ 00:16:11.931 { 00:16:11.931 "name": null, 00:16:11.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.931 "is_configured": false, 00:16:11.931 "data_offset": 0, 00:16:11.931 "data_size": 65536 00:16:11.931 }, 00:16:11.931 { 00:16:11.931 
"name": "BaseBdev2", 00:16:11.931 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:11.931 "is_configured": true, 00:16:11.931 "data_offset": 0, 00:16:11.931 "data_size": 65536 00:16:11.931 }, 00:16:11.931 { 00:16:11.931 "name": "BaseBdev3", 00:16:11.931 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:11.931 "is_configured": true, 00:16:11.931 "data_offset": 0, 00:16:11.931 "data_size": 65536 00:16:11.931 } 00:16:11.931 ] 00:16:11.931 }' 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.931 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.497 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.497 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.497 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.497 [2024-11-06 12:47:00.933011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.497 [2024-11-06 12:47:00.949320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:12.497 12:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.497 12:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.497 [2024-11-06 12:47:00.957077] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.429 12:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.430 12:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.430 12:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.430 12:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.430 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.430 "name": "raid_bdev1", 00:16:13.430 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:13.430 "strip_size_kb": 64, 00:16:13.430 "state": "online", 00:16:13.430 "raid_level": "raid5f", 00:16:13.430 "superblock": false, 00:16:13.430 "num_base_bdevs": 3, 00:16:13.430 "num_base_bdevs_discovered": 3, 00:16:13.430 "num_base_bdevs_operational": 3, 00:16:13.430 "process": { 00:16:13.430 "type": "rebuild", 00:16:13.430 "target": "spare", 00:16:13.430 "progress": { 00:16:13.430 "blocks": 18432, 00:16:13.430 "percent": 14 00:16:13.430 } 00:16:13.430 }, 00:16:13.430 "base_bdevs_list": [ 00:16:13.430 { 00:16:13.430 "name": "spare", 00:16:13.430 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:13.430 "is_configured": true, 00:16:13.430 "data_offset": 0, 00:16:13.430 "data_size": 65536 00:16:13.430 }, 00:16:13.430 { 00:16:13.430 "name": "BaseBdev2", 00:16:13.430 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:13.430 "is_configured": true, 00:16:13.430 "data_offset": 0, 00:16:13.430 "data_size": 65536 00:16:13.430 }, 00:16:13.430 { 00:16:13.430 "name": "BaseBdev3", 00:16:13.430 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:13.430 "is_configured": true, 00:16:13.430 "data_offset": 0, 00:16:13.430 
"data_size": 65536 00:16:13.430 } 00:16:13.430 ] 00:16:13.430 }' 00:16:13.430 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.430 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.430 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.687 [2024-11-06 12:47:02.123333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.687 [2024-11-06 12:47:02.175783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.687 [2024-11-06 12:47:02.176092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.687 [2024-11-06 12:47:02.176139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.687 [2024-11-06 12:47:02.176168] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.687 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.687 "name": "raid_bdev1", 00:16:13.687 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:13.687 "strip_size_kb": 64, 00:16:13.687 "state": "online", 00:16:13.687 "raid_level": "raid5f", 00:16:13.687 "superblock": false, 00:16:13.687 "num_base_bdevs": 3, 00:16:13.687 "num_base_bdevs_discovered": 2, 00:16:13.687 "num_base_bdevs_operational": 2, 00:16:13.687 "base_bdevs_list": [ 00:16:13.687 { 00:16:13.687 "name": null, 00:16:13.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.687 "is_configured": false, 00:16:13.687 "data_offset": 0, 00:16:13.687 "data_size": 65536 00:16:13.687 }, 00:16:13.687 { 00:16:13.688 "name": "BaseBdev2", 00:16:13.688 
"uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:13.688 "is_configured": true, 00:16:13.688 "data_offset": 0, 00:16:13.688 "data_size": 65536 00:16:13.688 }, 00:16:13.688 { 00:16:13.688 "name": "BaseBdev3", 00:16:13.688 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:13.688 "is_configured": true, 00:16:13.688 "data_offset": 0, 00:16:13.688 "data_size": 65536 00:16:13.688 } 00:16:13.688 ] 00:16:13.688 }' 00:16:13.688 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.688 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.254 "name": "raid_bdev1", 00:16:14.254 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:14.254 "strip_size_kb": 64, 00:16:14.254 "state": "online", 00:16:14.254 "raid_level": 
"raid5f", 00:16:14.254 "superblock": false, 00:16:14.254 "num_base_bdevs": 3, 00:16:14.254 "num_base_bdevs_discovered": 2, 00:16:14.254 "num_base_bdevs_operational": 2, 00:16:14.254 "base_bdevs_list": [ 00:16:14.254 { 00:16:14.254 "name": null, 00:16:14.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.254 "is_configured": false, 00:16:14.254 "data_offset": 0, 00:16:14.254 "data_size": 65536 00:16:14.254 }, 00:16:14.254 { 00:16:14.254 "name": "BaseBdev2", 00:16:14.254 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:14.254 "is_configured": true, 00:16:14.254 "data_offset": 0, 00:16:14.254 "data_size": 65536 00:16:14.254 }, 00:16:14.254 { 00:16:14.254 "name": "BaseBdev3", 00:16:14.254 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:14.254 "is_configured": true, 00:16:14.254 "data_offset": 0, 00:16:14.254 "data_size": 65536 00:16:14.254 } 00:16:14.254 ] 00:16:14.254 }' 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.254 [2024-11-06 12:47:02.855270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.254 [2024-11-06 12:47:02.871206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.254 12:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.254 [2024-11-06 12:47:02.879019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:15.635 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.635 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.635 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.635 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.635 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.636 "name": "raid_bdev1", 00:16:15.636 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:15.636 "strip_size_kb": 64, 00:16:15.636 "state": "online", 00:16:15.636 "raid_level": "raid5f", 00:16:15.636 "superblock": false, 00:16:15.636 "num_base_bdevs": 3, 00:16:15.636 "num_base_bdevs_discovered": 3, 00:16:15.636 "num_base_bdevs_operational": 3, 00:16:15.636 "process": { 00:16:15.636 "type": "rebuild", 00:16:15.636 "target": "spare", 00:16:15.636 "progress": { 00:16:15.636 "blocks": 18432, 00:16:15.636 
"percent": 14 00:16:15.636 } 00:16:15.636 }, 00:16:15.636 "base_bdevs_list": [ 00:16:15.636 { 00:16:15.636 "name": "spare", 00:16:15.636 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:15.636 "is_configured": true, 00:16:15.636 "data_offset": 0, 00:16:15.636 "data_size": 65536 00:16:15.636 }, 00:16:15.636 { 00:16:15.636 "name": "BaseBdev2", 00:16:15.636 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:15.636 "is_configured": true, 00:16:15.636 "data_offset": 0, 00:16:15.636 "data_size": 65536 00:16:15.636 }, 00:16:15.636 { 00:16:15.636 "name": "BaseBdev3", 00:16:15.636 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:15.636 "is_configured": true, 00:16:15.636 "data_offset": 0, 00:16:15.636 "data_size": 65536 00:16:15.636 } 00:16:15.636 ] 00:16:15.636 }' 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.636 12:47:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=598 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.636 "name": "raid_bdev1", 00:16:15.636 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:15.636 "strip_size_kb": 64, 00:16:15.636 "state": "online", 00:16:15.636 "raid_level": "raid5f", 00:16:15.636 "superblock": false, 00:16:15.636 "num_base_bdevs": 3, 00:16:15.636 "num_base_bdevs_discovered": 3, 00:16:15.636 "num_base_bdevs_operational": 3, 00:16:15.636 "process": { 00:16:15.636 "type": "rebuild", 00:16:15.636 "target": "spare", 00:16:15.636 "progress": { 00:16:15.636 "blocks": 22528, 00:16:15.636 "percent": 17 00:16:15.636 } 00:16:15.636 }, 00:16:15.636 "base_bdevs_list": [ 00:16:15.636 { 00:16:15.636 "name": "spare", 00:16:15.636 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:15.636 "is_configured": true, 00:16:15.636 "data_offset": 0, 00:16:15.636 "data_size": 65536 00:16:15.636 }, 00:16:15.636 { 00:16:15.636 "name": "BaseBdev2", 00:16:15.636 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:15.636 "is_configured": true, 00:16:15.636 "data_offset": 0, 00:16:15.636 
"data_size": 65536 00:16:15.636 }, 00:16:15.636 { 00:16:15.636 "name": "BaseBdev3", 00:16:15.636 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:15.636 "is_configured": true, 00:16:15.636 "data_offset": 0, 00:16:15.636 "data_size": 65536 00:16:15.636 } 00:16:15.636 ] 00:16:15.636 }' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.636 12:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.570 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.828 12:47:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.828 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.828 "name": "raid_bdev1", 00:16:16.828 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:16.828 "strip_size_kb": 64, 00:16:16.828 "state": "online", 00:16:16.828 "raid_level": "raid5f", 00:16:16.828 "superblock": false, 00:16:16.828 "num_base_bdevs": 3, 00:16:16.829 "num_base_bdevs_discovered": 3, 00:16:16.829 "num_base_bdevs_operational": 3, 00:16:16.829 "process": { 00:16:16.829 "type": "rebuild", 00:16:16.829 "target": "spare", 00:16:16.829 "progress": { 00:16:16.829 "blocks": 47104, 00:16:16.829 "percent": 35 00:16:16.829 } 00:16:16.829 }, 00:16:16.829 "base_bdevs_list": [ 00:16:16.829 { 00:16:16.829 "name": "spare", 00:16:16.829 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:16.829 "is_configured": true, 00:16:16.829 "data_offset": 0, 00:16:16.829 "data_size": 65536 00:16:16.829 }, 00:16:16.829 { 00:16:16.829 "name": "BaseBdev2", 00:16:16.829 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:16.829 "is_configured": true, 00:16:16.829 "data_offset": 0, 00:16:16.829 "data_size": 65536 00:16:16.829 }, 00:16:16.829 { 00:16:16.829 "name": "BaseBdev3", 00:16:16.829 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:16.829 "is_configured": true, 00:16:16.829 "data_offset": 0, 00:16:16.829 "data_size": 65536 00:16:16.829 } 00:16:16.829 ] 00:16:16.829 }' 00:16:16.829 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.829 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.829 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.829 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.829 12:47:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.765 12:47:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.023 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.023 "name": "raid_bdev1", 00:16:18.023 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:18.023 "strip_size_kb": 64, 00:16:18.023 "state": "online", 00:16:18.023 "raid_level": "raid5f", 00:16:18.023 "superblock": false, 00:16:18.023 "num_base_bdevs": 3, 00:16:18.023 "num_base_bdevs_discovered": 3, 00:16:18.023 "num_base_bdevs_operational": 3, 00:16:18.023 "process": { 00:16:18.023 "type": "rebuild", 00:16:18.023 "target": "spare", 00:16:18.023 "progress": { 00:16:18.023 "blocks": 69632, 00:16:18.023 "percent": 53 00:16:18.023 } 00:16:18.023 }, 00:16:18.023 "base_bdevs_list": [ 00:16:18.023 { 00:16:18.023 "name": "spare", 00:16:18.023 "uuid": 
"c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:18.023 "is_configured": true, 00:16:18.023 "data_offset": 0, 00:16:18.023 "data_size": 65536 00:16:18.023 }, 00:16:18.023 { 00:16:18.023 "name": "BaseBdev2", 00:16:18.023 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:18.023 "is_configured": true, 00:16:18.023 "data_offset": 0, 00:16:18.023 "data_size": 65536 00:16:18.023 }, 00:16:18.023 { 00:16:18.023 "name": "BaseBdev3", 00:16:18.023 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:18.023 "is_configured": true, 00:16:18.023 "data_offset": 0, 00:16:18.023 "data_size": 65536 00:16:18.023 } 00:16:18.023 ] 00:16:18.023 }' 00:16:18.023 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.023 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.023 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.023 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.023 12:47:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.958 12:47:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.958 "name": "raid_bdev1", 00:16:18.958 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:18.958 "strip_size_kb": 64, 00:16:18.958 "state": "online", 00:16:18.958 "raid_level": "raid5f", 00:16:18.958 "superblock": false, 00:16:18.958 "num_base_bdevs": 3, 00:16:18.958 "num_base_bdevs_discovered": 3, 00:16:18.958 "num_base_bdevs_operational": 3, 00:16:18.958 "process": { 00:16:18.958 "type": "rebuild", 00:16:18.958 "target": "spare", 00:16:18.958 "progress": { 00:16:18.958 "blocks": 92160, 00:16:18.958 "percent": 70 00:16:18.958 } 00:16:18.958 }, 00:16:18.958 "base_bdevs_list": [ 00:16:18.958 { 00:16:18.958 "name": "spare", 00:16:18.958 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:18.958 "is_configured": true, 00:16:18.958 "data_offset": 0, 00:16:18.958 "data_size": 65536 00:16:18.958 }, 00:16:18.958 { 00:16:18.958 "name": "BaseBdev2", 00:16:18.958 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:18.958 "is_configured": true, 00:16:18.958 "data_offset": 0, 00:16:18.958 "data_size": 65536 00:16:18.958 }, 00:16:18.958 { 00:16:18.958 "name": "BaseBdev3", 00:16:18.958 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:18.958 "is_configured": true, 00:16:18.958 "data_offset": 0, 00:16:18.958 "data_size": 65536 00:16:18.958 } 00:16:18.958 ] 00:16:18.958 }' 00:16:18.958 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.216 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.216 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.216 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.216 12:47:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.151 "name": "raid_bdev1", 00:16:20.151 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:20.151 "strip_size_kb": 64, 00:16:20.151 "state": "online", 00:16:20.151 "raid_level": "raid5f", 00:16:20.151 "superblock": false, 00:16:20.151 "num_base_bdevs": 3, 00:16:20.151 "num_base_bdevs_discovered": 3, 00:16:20.151 
"num_base_bdevs_operational": 3, 00:16:20.151 "process": { 00:16:20.151 "type": "rebuild", 00:16:20.151 "target": "spare", 00:16:20.151 "progress": { 00:16:20.151 "blocks": 116736, 00:16:20.151 "percent": 89 00:16:20.151 } 00:16:20.151 }, 00:16:20.151 "base_bdevs_list": [ 00:16:20.151 { 00:16:20.151 "name": "spare", 00:16:20.151 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:20.151 "is_configured": true, 00:16:20.151 "data_offset": 0, 00:16:20.151 "data_size": 65536 00:16:20.151 }, 00:16:20.151 { 00:16:20.151 "name": "BaseBdev2", 00:16:20.151 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:20.151 "is_configured": true, 00:16:20.151 "data_offset": 0, 00:16:20.151 "data_size": 65536 00:16:20.151 }, 00:16:20.151 { 00:16:20.151 "name": "BaseBdev3", 00:16:20.151 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:20.151 "is_configured": true, 00:16:20.151 "data_offset": 0, 00:16:20.151 "data_size": 65536 00:16:20.151 } 00:16:20.151 ] 00:16:20.151 }' 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.151 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.409 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.409 12:47:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.017 [2024-11-06 12:47:09.368655] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:21.017 [2024-11-06 12:47:09.368786] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:21.017 [2024-11-06 12:47:09.368874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.276 "name": "raid_bdev1", 00:16:21.276 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:21.276 "strip_size_kb": 64, 00:16:21.276 "state": "online", 00:16:21.276 "raid_level": "raid5f", 00:16:21.276 "superblock": false, 00:16:21.276 "num_base_bdevs": 3, 00:16:21.276 "num_base_bdevs_discovered": 3, 00:16:21.276 "num_base_bdevs_operational": 3, 00:16:21.276 "base_bdevs_list": [ 00:16:21.276 { 00:16:21.276 "name": "spare", 00:16:21.276 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:21.276 "is_configured": true, 00:16:21.276 "data_offset": 0, 00:16:21.276 "data_size": 65536 00:16:21.276 }, 00:16:21.276 { 00:16:21.276 "name": "BaseBdev2", 00:16:21.276 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:21.276 "is_configured": true, 00:16:21.276 
"data_offset": 0, 00:16:21.276 "data_size": 65536 00:16:21.276 }, 00:16:21.276 { 00:16:21.276 "name": "BaseBdev3", 00:16:21.276 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:21.276 "is_configured": true, 00:16:21.276 "data_offset": 0, 00:16:21.276 "data_size": 65536 00:16:21.276 } 00:16:21.276 ] 00:16:21.276 }' 00:16:21.276 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.533 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:21.534 12:47:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.534 12:47:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.534 "name": "raid_bdev1", 00:16:21.534 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:21.534 "strip_size_kb": 64, 00:16:21.534 "state": "online", 00:16:21.534 "raid_level": "raid5f", 00:16:21.534 "superblock": false, 00:16:21.534 "num_base_bdevs": 3, 00:16:21.534 "num_base_bdevs_discovered": 3, 00:16:21.534 "num_base_bdevs_operational": 3, 00:16:21.534 "base_bdevs_list": [ 00:16:21.534 { 00:16:21.534 "name": "spare", 00:16:21.534 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:21.534 "is_configured": true, 00:16:21.534 "data_offset": 0, 00:16:21.534 "data_size": 65536 00:16:21.534 }, 00:16:21.534 { 00:16:21.534 "name": "BaseBdev2", 00:16:21.534 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:21.534 "is_configured": true, 00:16:21.534 "data_offset": 0, 00:16:21.534 "data_size": 65536 00:16:21.534 }, 00:16:21.534 { 00:16:21.534 "name": "BaseBdev3", 00:16:21.534 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:21.534 "is_configured": true, 00:16:21.534 "data_offset": 0, 00:16:21.534 "data_size": 65536 00:16:21.534 } 00:16:21.534 ] 00:16:21.534 }' 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.534 12:47:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.534 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.792 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.792 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.792 "name": "raid_bdev1", 00:16:21.792 "uuid": "7c1bd729-1047-4ba4-8e33-ac5742e3da29", 00:16:21.792 "strip_size_kb": 64, 00:16:21.792 "state": "online", 00:16:21.792 "raid_level": "raid5f", 00:16:21.792 "superblock": false, 00:16:21.792 "num_base_bdevs": 3, 00:16:21.792 "num_base_bdevs_discovered": 3, 00:16:21.792 "num_base_bdevs_operational": 3, 00:16:21.792 "base_bdevs_list": [ 00:16:21.792 { 00:16:21.792 "name": "spare", 00:16:21.792 "uuid": "c6317203-a4c0-56e3-b360-c4b0c3c43669", 00:16:21.792 "is_configured": true, 00:16:21.792 "data_offset": 0, 00:16:21.792 "data_size": 65536 00:16:21.792 }, 00:16:21.792 { 00:16:21.792 
"name": "BaseBdev2", 00:16:21.792 "uuid": "2a3569d0-14a0-5114-abf8-840d72221755", 00:16:21.792 "is_configured": true, 00:16:21.792 "data_offset": 0, 00:16:21.792 "data_size": 65536 00:16:21.792 }, 00:16:21.792 { 00:16:21.792 "name": "BaseBdev3", 00:16:21.792 "uuid": "e624f382-6810-5192-9720-4c3be1c79a42", 00:16:21.792 "is_configured": true, 00:16:21.792 "data_offset": 0, 00:16:21.792 "data_size": 65536 00:16:21.792 } 00:16:21.792 ] 00:16:21.792 }' 00:16:21.792 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.792 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 [2024-11-06 12:47:10.711130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.358 [2024-11-06 12:47:10.711173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.358 [2024-11-06 12:47:10.711486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.358 [2024-11-06 12:47:10.711627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.358 [2024-11-06 12:47:10.711656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.358 12:47:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:22.617 /dev/nbd0 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.617 1+0 records in 00:16:22.617 1+0 records out 00:16:22.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383685 s, 10.7 MB/s 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.617 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:22.876 /dev/nbd1 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.876 1+0 records in 00:16:22.876 1+0 records out 00:16:22.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444855 s, 9.2 MB/s 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:22.876 12:47:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.876 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.134 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:23.392 12:47:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82027 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82027 ']' 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82027 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82027 00:16:23.670 killing process with pid 82027 00:16:23.670 Received shutdown signal, test time was about 60.000000 seconds 00:16:23.670 00:16:23.670 Latency(us) 00:16:23.670 
[2024-11-06T12:47:12.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.670 [2024-11-06T12:47:12.327Z] =================================================================================================================== 00:16:23.670 [2024-11-06T12:47:12.327Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82027' 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82027 00:16:23.670 12:47:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82027 00:16:23.670 [2024-11-06 12:47:12.303034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.238 [2024-11-06 12:47:12.686957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.174 12:47:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:25.174 00:16:25.174 real 0m16.572s 00:16:25.174 user 0m21.220s 00:16:25.174 sys 0m2.027s 00:16:25.174 12:47:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:25.174 ************************************ 00:16:25.174 END TEST raid5f_rebuild_test 00:16:25.174 ************************************ 00:16:25.174 12:47:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.432 12:47:13 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:25.432 12:47:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:25.432 12:47:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:25.432 12:47:13 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.432 ************************************ 00:16:25.432 START TEST raid5f_rebuild_test_sb 00:16:25.432 ************************************ 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82476 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82476 00:16:25.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82476 ']' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.432 12:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.432 [2024-11-06 12:47:13.956435] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:16:25.432 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:25.432 Zero copy mechanism will not be used. 
00:16:25.432 [2024-11-06 12:47:13.956643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82476 ] 00:16:25.691 [2024-11-06 12:47:14.135060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.691 [2024-11-06 12:47:14.283121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.949 [2024-11-06 12:47:14.507465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.949 [2024-11-06 12:47:14.507563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.517 12:47:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:26.517 12:47:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:26.517 12:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.517 12:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:26.517 12:47:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.517 12:47:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.517 BaseBdev1_malloc 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.517 [2024-11-06 12:47:15.051299] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:26.517 [2024-11-06 12:47:15.051431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.517 [2024-11-06 12:47:15.051486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:26.517 [2024-11-06 12:47:15.051518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.517 [2024-11-06 12:47:15.054888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.517 [2024-11-06 12:47:15.054946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:26.517 BaseBdev1 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.517 BaseBdev2_malloc 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.517 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.518 [2024-11-06 12:47:15.112080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:26.518 [2024-11-06 12:47:15.112218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:26.518 [2024-11-06 12:47:15.112276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:26.518 [2024-11-06 12:47:15.112308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.518 [2024-11-06 12:47:15.115566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.518 [2024-11-06 12:47:15.115626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:26.518 BaseBdev2 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.518 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 BaseBdev3_malloc 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 [2024-11-06 12:47:15.187728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:26.777 [2024-11-06 12:47:15.187835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.777 [2024-11-06 12:47:15.187893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:26.777 [2024-11-06 
12:47:15.187926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.777 [2024-11-06 12:47:15.191338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.777 [2024-11-06 12:47:15.191396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:26.777 BaseBdev3 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 spare_malloc 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 spare_delay 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 [2024-11-06 12:47:15.260549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:26.777 [2024-11-06 12:47:15.260647] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.777 [2024-11-06 12:47:15.260701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:26.777 [2024-11-06 12:47:15.260732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.777 [2024-11-06 12:47:15.264210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.777 [2024-11-06 12:47:15.264284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:26.777 spare 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 [2024-11-06 12:47:15.272837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.777 [2024-11-06 12:47:15.275554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.777 [2024-11-06 12:47:15.275661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.777 [2024-11-06 12:47:15.275946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:26.777 [2024-11-06 12:47:15.275976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:26.777 [2024-11-06 12:47:15.276392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.777 [2024-11-06 12:47:15.281824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:26.777 [2024-11-06 12:47:15.281863] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:26.777 [2024-11-06 12:47:15.282255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.777 "name": "raid_bdev1", 00:16:26.777 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:26.777 "strip_size_kb": 64, 00:16:26.777 "state": "online", 00:16:26.777 "raid_level": "raid5f", 00:16:26.777 "superblock": true, 00:16:26.777 "num_base_bdevs": 3, 00:16:26.777 "num_base_bdevs_discovered": 3, 00:16:26.777 "num_base_bdevs_operational": 3, 00:16:26.777 "base_bdevs_list": [ 00:16:26.777 { 00:16:26.777 "name": "BaseBdev1", 00:16:26.777 "uuid": "88dfa9ec-b672-5b1b-a7ab-4468a774f247", 00:16:26.777 "is_configured": true, 00:16:26.777 "data_offset": 2048, 00:16:26.777 "data_size": 63488 00:16:26.777 }, 00:16:26.777 { 00:16:26.777 "name": "BaseBdev2", 00:16:26.777 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:26.777 "is_configured": true, 00:16:26.777 "data_offset": 2048, 00:16:26.777 "data_size": 63488 00:16:26.777 }, 00:16:26.777 { 00:16:26.777 "name": "BaseBdev3", 00:16:26.777 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:26.777 "is_configured": true, 00:16:26.777 "data_offset": 2048, 00:16:26.777 "data_size": 63488 00:16:26.777 } 00:16:26.777 ] 00:16:26.777 }' 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.777 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 [2024-11-06 12:47:15.776888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.345 12:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:27.603 [2024-11-06 12:47:16.160840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:27.603 /dev/nbd0 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.603 1+0 records in 00:16:27.603 1+0 records out 00:16:27.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473634 s, 8.6 MB/s 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:27.603 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:27.604 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.604 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.604 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:27.604 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:27.604 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:27.604 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:28.170 496+0 records in 00:16:28.170 496+0 records out 00:16:28.170 65011712 bytes (65 MB, 62 MiB) copied, 0.493403 s, 132 MB/s 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:28.170 12:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:28.429 [2024-11-06 12:47:17.063689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.429 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.429 [2024-11-06 12:47:17.082070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.686 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.686 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:28.686 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.687 "name": "raid_bdev1", 00:16:28.687 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:28.687 "strip_size_kb": 64, 00:16:28.687 "state": "online", 00:16:28.687 "raid_level": "raid5f", 00:16:28.687 "superblock": true, 00:16:28.687 "num_base_bdevs": 3, 00:16:28.687 "num_base_bdevs_discovered": 2, 00:16:28.687 "num_base_bdevs_operational": 2, 00:16:28.687 "base_bdevs_list": [ 00:16:28.687 { 00:16:28.687 "name": null, 00:16:28.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.687 "is_configured": 
false, 00:16:28.687 "data_offset": 0, 00:16:28.687 "data_size": 63488 00:16:28.687 }, 00:16:28.687 { 00:16:28.687 "name": "BaseBdev2", 00:16:28.687 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:28.687 "is_configured": true, 00:16:28.687 "data_offset": 2048, 00:16:28.687 "data_size": 63488 00:16:28.687 }, 00:16:28.687 { 00:16:28.687 "name": "BaseBdev3", 00:16:28.687 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:28.687 "is_configured": true, 00:16:28.687 "data_offset": 2048, 00:16:28.687 "data_size": 63488 00:16:28.687 } 00:16:28.687 ] 00:16:28.687 }' 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.687 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.945 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.945 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.945 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.203 [2024-11-06 12:47:17.602238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.203 [2024-11-06 12:47:17.619192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:29.203 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.203 12:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:29.203 [2024-11-06 12:47:17.627019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.184 12:47:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.184 "name": "raid_bdev1", 00:16:30.184 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:30.184 "strip_size_kb": 64, 00:16:30.184 "state": "online", 00:16:30.184 "raid_level": "raid5f", 00:16:30.184 "superblock": true, 00:16:30.184 "num_base_bdevs": 3, 00:16:30.184 "num_base_bdevs_discovered": 3, 00:16:30.184 "num_base_bdevs_operational": 3, 00:16:30.184 "process": { 00:16:30.184 "type": "rebuild", 00:16:30.184 "target": "spare", 00:16:30.184 "progress": { 00:16:30.184 "blocks": 18432, 00:16:30.184 "percent": 14 00:16:30.184 } 00:16:30.184 }, 00:16:30.184 "base_bdevs_list": [ 00:16:30.184 { 00:16:30.184 "name": "spare", 00:16:30.184 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:30.184 "is_configured": true, 00:16:30.184 "data_offset": 2048, 00:16:30.184 "data_size": 63488 00:16:30.184 }, 00:16:30.184 { 00:16:30.184 "name": "BaseBdev2", 00:16:30.184 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:30.184 "is_configured": true, 00:16:30.184 "data_offset": 2048, 00:16:30.184 "data_size": 63488 
00:16:30.184 }, 00:16:30.184 { 00:16:30.184 "name": "BaseBdev3", 00:16:30.184 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:30.184 "is_configured": true, 00:16:30.184 "data_offset": 2048, 00:16:30.184 "data_size": 63488 00:16:30.184 } 00:16:30.184 ] 00:16:30.184 }' 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.184 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.184 [2024-11-06 12:47:18.801317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:30.442 [2024-11-06 12:47:18.847143] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:30.442 [2024-11-06 12:47:18.847301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.442 [2024-11-06 12:47:18.847359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:30.442 [2024-11-06 12:47:18.847375] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.442 "name": "raid_bdev1", 00:16:30.442 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:30.442 "strip_size_kb": 64, 00:16:30.442 "state": "online", 00:16:30.442 "raid_level": "raid5f", 00:16:30.442 "superblock": true, 00:16:30.442 "num_base_bdevs": 3, 00:16:30.442 "num_base_bdevs_discovered": 2, 00:16:30.442 "num_base_bdevs_operational": 2, 00:16:30.442 "base_bdevs_list": [ 00:16:30.442 
{ 00:16:30.442 "name": null, 00:16:30.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.442 "is_configured": false, 00:16:30.442 "data_offset": 0, 00:16:30.442 "data_size": 63488 00:16:30.442 }, 00:16:30.442 { 00:16:30.442 "name": "BaseBdev2", 00:16:30.442 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:30.442 "is_configured": true, 00:16:30.442 "data_offset": 2048, 00:16:30.442 "data_size": 63488 00:16:30.442 }, 00:16:30.442 { 00:16:30.442 "name": "BaseBdev3", 00:16:30.442 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:30.442 "is_configured": true, 00:16:30.442 "data_offset": 2048, 00:16:30.442 "data_size": 63488 00:16:30.442 } 00:16:30.442 ] 00:16:30.442 }' 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.442 12:47:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.008 "name": "raid_bdev1", 00:16:31.008 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:31.008 "strip_size_kb": 64, 00:16:31.008 "state": "online", 00:16:31.008 "raid_level": "raid5f", 00:16:31.008 "superblock": true, 00:16:31.008 "num_base_bdevs": 3, 00:16:31.008 "num_base_bdevs_discovered": 2, 00:16:31.008 "num_base_bdevs_operational": 2, 00:16:31.008 "base_bdevs_list": [ 00:16:31.008 { 00:16:31.008 "name": null, 00:16:31.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.008 "is_configured": false, 00:16:31.008 "data_offset": 0, 00:16:31.008 "data_size": 63488 00:16:31.008 }, 00:16:31.008 { 00:16:31.008 "name": "BaseBdev2", 00:16:31.008 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:31.008 "is_configured": true, 00:16:31.008 "data_offset": 2048, 00:16:31.008 "data_size": 63488 00:16:31.008 }, 00:16:31.008 { 00:16:31.008 "name": "BaseBdev3", 00:16:31.008 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:31.008 "is_configured": true, 00:16:31.008 "data_offset": 2048, 00:16:31.008 "data_size": 63488 00:16:31.008 } 00:16:31.008 ] 00:16:31.008 }' 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:31.008 [2024-11-06 12:47:19.585916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.008 [2024-11-06 12:47:19.603138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.008 12:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:31.008 [2024-11-06 12:47:19.610912] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.383 "name": "raid_bdev1", 00:16:32.383 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:32.383 "strip_size_kb": 64, 00:16:32.383 "state": "online", 
00:16:32.383 "raid_level": "raid5f", 00:16:32.383 "superblock": true, 00:16:32.383 "num_base_bdevs": 3, 00:16:32.383 "num_base_bdevs_discovered": 3, 00:16:32.383 "num_base_bdevs_operational": 3, 00:16:32.383 "process": { 00:16:32.383 "type": "rebuild", 00:16:32.383 "target": "spare", 00:16:32.383 "progress": { 00:16:32.383 "blocks": 18432, 00:16:32.383 "percent": 14 00:16:32.383 } 00:16:32.383 }, 00:16:32.383 "base_bdevs_list": [ 00:16:32.383 { 00:16:32.383 "name": "spare", 00:16:32.383 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:32.383 "is_configured": true, 00:16:32.383 "data_offset": 2048, 00:16:32.383 "data_size": 63488 00:16:32.383 }, 00:16:32.383 { 00:16:32.383 "name": "BaseBdev2", 00:16:32.383 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:32.383 "is_configured": true, 00:16:32.383 "data_offset": 2048, 00:16:32.383 "data_size": 63488 00:16:32.383 }, 00:16:32.383 { 00:16:32.383 "name": "BaseBdev3", 00:16:32.383 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:32.383 "is_configured": true, 00:16:32.383 "data_offset": 2048, 00:16:32.383 "data_size": 63488 00:16:32.383 } 00:16:32.383 ] 00:16:32.383 }' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:32.383 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=614 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.383 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.383 "name": "raid_bdev1", 00:16:32.383 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:32.383 "strip_size_kb": 64, 00:16:32.383 "state": "online", 00:16:32.383 "raid_level": "raid5f", 00:16:32.383 "superblock": true, 00:16:32.383 "num_base_bdevs": 3, 00:16:32.383 "num_base_bdevs_discovered": 3, 00:16:32.383 "num_base_bdevs_operational": 3, 00:16:32.383 "process": { 00:16:32.383 "type": 
"rebuild", 00:16:32.383 "target": "spare", 00:16:32.383 "progress": { 00:16:32.383 "blocks": 22528, 00:16:32.383 "percent": 17 00:16:32.383 } 00:16:32.384 }, 00:16:32.384 "base_bdevs_list": [ 00:16:32.384 { 00:16:32.384 "name": "spare", 00:16:32.384 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:32.384 "is_configured": true, 00:16:32.384 "data_offset": 2048, 00:16:32.384 "data_size": 63488 00:16:32.384 }, 00:16:32.384 { 00:16:32.384 "name": "BaseBdev2", 00:16:32.384 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:32.384 "is_configured": true, 00:16:32.384 "data_offset": 2048, 00:16:32.384 "data_size": 63488 00:16:32.384 }, 00:16:32.384 { 00:16:32.384 "name": "BaseBdev3", 00:16:32.384 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:32.384 "is_configured": true, 00:16:32.384 "data_offset": 2048, 00:16:32.384 "data_size": 63488 00:16:32.384 } 00:16:32.384 ] 00:16:32.384 }' 00:16:32.384 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.384 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.384 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.384 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.384 12:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.322 12:47:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.580 12:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.580 "name": "raid_bdev1", 00:16:33.580 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:33.580 "strip_size_kb": 64, 00:16:33.580 "state": "online", 00:16:33.580 "raid_level": "raid5f", 00:16:33.580 "superblock": true, 00:16:33.580 "num_base_bdevs": 3, 00:16:33.580 "num_base_bdevs_discovered": 3, 00:16:33.580 "num_base_bdevs_operational": 3, 00:16:33.580 "process": { 00:16:33.580 "type": "rebuild", 00:16:33.580 "target": "spare", 00:16:33.580 "progress": { 00:16:33.580 "blocks": 47104, 00:16:33.580 "percent": 37 00:16:33.580 } 00:16:33.580 }, 00:16:33.580 "base_bdevs_list": [ 00:16:33.580 { 00:16:33.580 "name": "spare", 00:16:33.580 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:33.580 "is_configured": true, 00:16:33.580 "data_offset": 2048, 00:16:33.580 "data_size": 63488 00:16:33.580 }, 00:16:33.580 { 00:16:33.580 "name": "BaseBdev2", 00:16:33.580 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:33.580 "is_configured": true, 00:16:33.580 "data_offset": 2048, 00:16:33.580 "data_size": 63488 00:16:33.580 }, 00:16:33.580 { 00:16:33.580 "name": "BaseBdev3", 00:16:33.580 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:33.580 
"is_configured": true, 00:16:33.580 "data_offset": 2048, 00:16:33.580 "data_size": 63488 00:16:33.580 } 00:16:33.580 ] 00:16:33.580 }' 00:16:33.580 12:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.580 12:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.580 12:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.580 12:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.580 12:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.516 "name": "raid_bdev1", 00:16:34.516 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:34.516 "strip_size_kb": 64, 00:16:34.516 "state": "online", 00:16:34.516 "raid_level": "raid5f", 00:16:34.516 "superblock": true, 00:16:34.516 "num_base_bdevs": 3, 00:16:34.516 "num_base_bdevs_discovered": 3, 00:16:34.516 "num_base_bdevs_operational": 3, 00:16:34.516 "process": { 00:16:34.516 "type": "rebuild", 00:16:34.516 "target": "spare", 00:16:34.516 "progress": { 00:16:34.516 "blocks": 69632, 00:16:34.516 "percent": 54 00:16:34.516 } 00:16:34.516 }, 00:16:34.516 "base_bdevs_list": [ 00:16:34.516 { 00:16:34.516 "name": "spare", 00:16:34.516 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:34.516 "is_configured": true, 00:16:34.516 "data_offset": 2048, 00:16:34.516 "data_size": 63488 00:16:34.516 }, 00:16:34.516 { 00:16:34.516 "name": "BaseBdev2", 00:16:34.516 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:34.516 "is_configured": true, 00:16:34.516 "data_offset": 2048, 00:16:34.516 "data_size": 63488 00:16:34.516 }, 00:16:34.516 { 00:16:34.516 "name": "BaseBdev3", 00:16:34.516 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:34.516 "is_configured": true, 00:16:34.516 "data_offset": 2048, 00:16:34.516 "data_size": 63488 00:16:34.516 } 00:16:34.516 ] 00:16:34.516 }' 00:16:34.516 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.775 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.775 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.775 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.775 12:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.736 "name": "raid_bdev1", 00:16:35.736 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:35.736 "strip_size_kb": 64, 00:16:35.736 "state": "online", 00:16:35.736 "raid_level": "raid5f", 00:16:35.736 "superblock": true, 00:16:35.736 "num_base_bdevs": 3, 00:16:35.736 "num_base_bdevs_discovered": 3, 00:16:35.736 "num_base_bdevs_operational": 3, 00:16:35.736 "process": { 00:16:35.736 "type": "rebuild", 00:16:35.736 "target": "spare", 00:16:35.736 "progress": { 00:16:35.736 "blocks": 94208, 00:16:35.736 "percent": 74 00:16:35.736 } 00:16:35.736 }, 00:16:35.736 "base_bdevs_list": [ 00:16:35.736 { 00:16:35.736 "name": "spare", 00:16:35.736 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:35.736 "is_configured": true, 
00:16:35.736 "data_offset": 2048, 00:16:35.736 "data_size": 63488 00:16:35.736 }, 00:16:35.736 { 00:16:35.736 "name": "BaseBdev2", 00:16:35.736 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:35.736 "is_configured": true, 00:16:35.736 "data_offset": 2048, 00:16:35.736 "data_size": 63488 00:16:35.736 }, 00:16:35.736 { 00:16:35.736 "name": "BaseBdev3", 00:16:35.736 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:35.736 "is_configured": true, 00:16:35.736 "data_offset": 2048, 00:16:35.736 "data_size": 63488 00:16:35.736 } 00:16:35.736 ] 00:16:35.736 }' 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.736 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.994 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.994 12:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.928 "name": "raid_bdev1", 00:16:36.928 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:36.928 "strip_size_kb": 64, 00:16:36.928 "state": "online", 00:16:36.928 "raid_level": "raid5f", 00:16:36.928 "superblock": true, 00:16:36.928 "num_base_bdevs": 3, 00:16:36.928 "num_base_bdevs_discovered": 3, 00:16:36.928 "num_base_bdevs_operational": 3, 00:16:36.928 "process": { 00:16:36.928 "type": "rebuild", 00:16:36.928 "target": "spare", 00:16:36.928 "progress": { 00:16:36.928 "blocks": 116736, 00:16:36.928 "percent": 91 00:16:36.928 } 00:16:36.928 }, 00:16:36.928 "base_bdevs_list": [ 00:16:36.928 { 00:16:36.928 "name": "spare", 00:16:36.928 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:36.928 "is_configured": true, 00:16:36.928 "data_offset": 2048, 00:16:36.928 "data_size": 63488 00:16:36.928 }, 00:16:36.928 { 00:16:36.928 "name": "BaseBdev2", 00:16:36.928 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:36.928 "is_configured": true, 00:16:36.928 "data_offset": 2048, 00:16:36.928 "data_size": 63488 00:16:36.928 }, 00:16:36.928 { 00:16:36.928 "name": "BaseBdev3", 00:16:36.928 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:36.928 "is_configured": true, 00:16:36.928 "data_offset": 2048, 00:16:36.928 "data_size": 63488 00:16:36.928 } 00:16:36.928 ] 00:16:36.928 }' 00:16:36.928 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.929 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:36.929 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.187 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.187 12:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.445 [2024-11-06 12:47:25.906716] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:37.445 [2024-11-06 12:47:25.906850] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:37.445 [2024-11-06 12:47:25.907055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.011 12:47:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.011 "name": "raid_bdev1", 00:16:38.011 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:38.011 "strip_size_kb": 64, 00:16:38.011 "state": "online", 00:16:38.011 "raid_level": "raid5f", 00:16:38.011 "superblock": true, 00:16:38.011 "num_base_bdevs": 3, 00:16:38.011 "num_base_bdevs_discovered": 3, 00:16:38.011 "num_base_bdevs_operational": 3, 00:16:38.011 "base_bdevs_list": [ 00:16:38.011 { 00:16:38.011 "name": "spare", 00:16:38.011 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:38.011 "is_configured": true, 00:16:38.011 "data_offset": 2048, 00:16:38.011 "data_size": 63488 00:16:38.011 }, 00:16:38.011 { 00:16:38.011 "name": "BaseBdev2", 00:16:38.011 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:38.011 "is_configured": true, 00:16:38.011 "data_offset": 2048, 00:16:38.011 "data_size": 63488 00:16:38.011 }, 00:16:38.011 { 00:16:38.011 "name": "BaseBdev3", 00:16:38.011 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:38.011 "is_configured": true, 00:16:38.011 "data_offset": 2048, 00:16:38.011 "data_size": 63488 00:16:38.011 } 00:16:38.011 ] 00:16:38.011 }' 00:16:38.011 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.269 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:38.269 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.269 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:38.269 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:38.269 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.269 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.269 
12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.270 "name": "raid_bdev1", 00:16:38.270 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:38.270 "strip_size_kb": 64, 00:16:38.270 "state": "online", 00:16:38.270 "raid_level": "raid5f", 00:16:38.270 "superblock": true, 00:16:38.270 "num_base_bdevs": 3, 00:16:38.270 "num_base_bdevs_discovered": 3, 00:16:38.270 "num_base_bdevs_operational": 3, 00:16:38.270 "base_bdevs_list": [ 00:16:38.270 { 00:16:38.270 "name": "spare", 00:16:38.270 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:38.270 "is_configured": true, 00:16:38.270 "data_offset": 2048, 00:16:38.270 "data_size": 63488 00:16:38.270 }, 00:16:38.270 { 00:16:38.270 "name": "BaseBdev2", 00:16:38.270 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:38.270 "is_configured": true, 00:16:38.270 "data_offset": 2048, 00:16:38.270 "data_size": 63488 00:16:38.270 }, 00:16:38.270 { 00:16:38.270 "name": "BaseBdev3", 00:16:38.270 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:38.270 "is_configured": true, 00:16:38.270 "data_offset": 2048, 
00:16:38.270 "data_size": 63488 00:16:38.270 } 00:16:38.270 ] 00:16:38.270 }' 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:38.270 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.528 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.528 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.528 "name": "raid_bdev1", 00:16:38.528 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:38.528 "strip_size_kb": 64, 00:16:38.528 "state": "online", 00:16:38.528 "raid_level": "raid5f", 00:16:38.528 "superblock": true, 00:16:38.528 "num_base_bdevs": 3, 00:16:38.528 "num_base_bdevs_discovered": 3, 00:16:38.528 "num_base_bdevs_operational": 3, 00:16:38.528 "base_bdevs_list": [ 00:16:38.528 { 00:16:38.528 "name": "spare", 00:16:38.528 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:38.528 "is_configured": true, 00:16:38.528 "data_offset": 2048, 00:16:38.528 "data_size": 63488 00:16:38.528 }, 00:16:38.528 { 00:16:38.528 "name": "BaseBdev2", 00:16:38.528 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:38.528 "is_configured": true, 00:16:38.528 "data_offset": 2048, 00:16:38.528 "data_size": 63488 00:16:38.528 }, 00:16:38.528 { 00:16:38.528 "name": "BaseBdev3", 00:16:38.528 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:38.528 "is_configured": true, 00:16:38.528 "data_offset": 2048, 00:16:38.528 "data_size": 63488 00:16:38.528 } 00:16:38.528 ] 00:16:38.528 }' 00:16:38.528 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.528 12:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 [2024-11-06 12:47:27.416734] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.787 [2024-11-06 12:47:27.416932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.787 [2024-11-06 12:47:27.417183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.787 [2024-11-06 12:47:27.417332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.787 [2024-11-06 12:47:27.417361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:38.787 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:39.050 12:47:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.050 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:39.308 /dev/nbd0 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.308 1+0 records in 00:16:39.308 1+0 records out 00:16:39.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002916 s, 14.0 MB/s 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.308 12:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:39.566 /dev/nbd1 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:39.566 12:47:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.566 1+0 records in 00:16:39.566 1+0 records out 00:16:39.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377365 s, 10.9 MB/s 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.566 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.825 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.086 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( 
i <= 20 )) 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.344 [2024-11-06 12:47:28.954733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:40.344 [2024-11-06 12:47:28.954828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.344 [2024-11-06 12:47:28.954864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:40.344 [2024-11-06 12:47:28.954884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.344 [2024-11-06 12:47:28.958030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.344 [2024-11-06 12:47:28.958085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:40.344 [2024-11-06 12:47:28.958232] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:40.344 [2024-11-06 12:47:28.958317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.344 [2024-11-06 12:47:28.958502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.344 [2024-11-06 12:47:28.958667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.344 spare 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:40.344 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.345 12:47:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.602 [2024-11-06 12:47:29.058824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:40.602 [2024-11-06 12:47:29.058935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:40.602 [2024-11-06 12:47:29.059556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:40.602 [2024-11-06 12:47:29.064851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:40.602 [2024-11-06 12:47:29.064889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:40.602 [2024-11-06 12:47:29.065252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.602 "name": "raid_bdev1", 00:16:40.602 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:40.602 "strip_size_kb": 64, 00:16:40.602 "state": "online", 00:16:40.602 "raid_level": "raid5f", 00:16:40.602 "superblock": true, 00:16:40.602 "num_base_bdevs": 3, 00:16:40.602 "num_base_bdevs_discovered": 3, 00:16:40.602 "num_base_bdevs_operational": 3, 00:16:40.602 "base_bdevs_list": [ 00:16:40.602 { 
00:16:40.602 "name": "spare", 00:16:40.602 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:40.602 "is_configured": true, 00:16:40.602 "data_offset": 2048, 00:16:40.602 "data_size": 63488 00:16:40.602 }, 00:16:40.602 { 00:16:40.602 "name": "BaseBdev2", 00:16:40.602 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:40.602 "is_configured": true, 00:16:40.602 "data_offset": 2048, 00:16:40.602 "data_size": 63488 00:16:40.602 }, 00:16:40.602 { 00:16:40.602 "name": "BaseBdev3", 00:16:40.602 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:40.602 "is_configured": true, 00:16:40.602 "data_offset": 2048, 00:16:40.602 "data_size": 63488 00:16:40.602 } 00:16:40.602 ] 00:16:40.602 }' 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.602 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.217 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.217 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.217 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.217 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.217 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.217 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.218 "name": "raid_bdev1", 00:16:41.218 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:41.218 "strip_size_kb": 64, 00:16:41.218 "state": "online", 00:16:41.218 "raid_level": "raid5f", 00:16:41.218 "superblock": true, 00:16:41.218 "num_base_bdevs": 3, 00:16:41.218 "num_base_bdevs_discovered": 3, 00:16:41.218 "num_base_bdevs_operational": 3, 00:16:41.218 "base_bdevs_list": [ 00:16:41.218 { 00:16:41.218 "name": "spare", 00:16:41.218 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:41.218 "is_configured": true, 00:16:41.218 "data_offset": 2048, 00:16:41.218 "data_size": 63488 00:16:41.218 }, 00:16:41.218 { 00:16:41.218 "name": "BaseBdev2", 00:16:41.218 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:41.218 "is_configured": true, 00:16:41.218 "data_offset": 2048, 00:16:41.218 "data_size": 63488 00:16:41.218 }, 00:16:41.218 { 00:16:41.218 "name": "BaseBdev3", 00:16:41.218 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:41.218 "is_configured": true, 00:16:41.218 "data_offset": 2048, 00:16:41.218 "data_size": 63488 00:16:41.218 } 00:16:41.218 ] 00:16:41.218 }' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.218 [2024-11-06 12:47:29.795684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.218 12:47:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.218 "name": "raid_bdev1", 00:16:41.218 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:41.218 "strip_size_kb": 64, 00:16:41.218 "state": "online", 00:16:41.218 "raid_level": "raid5f", 00:16:41.218 "superblock": true, 00:16:41.218 "num_base_bdevs": 3, 00:16:41.218 "num_base_bdevs_discovered": 2, 00:16:41.218 "num_base_bdevs_operational": 2, 00:16:41.218 "base_bdevs_list": [ 00:16:41.218 { 00:16:41.218 "name": null, 00:16:41.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.218 "is_configured": false, 00:16:41.218 "data_offset": 0, 00:16:41.218 "data_size": 63488 00:16:41.218 }, 00:16:41.218 { 00:16:41.218 "name": "BaseBdev2", 00:16:41.218 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:41.218 "is_configured": true, 00:16:41.218 "data_offset": 2048, 00:16:41.218 "data_size": 63488 00:16:41.218 }, 00:16:41.218 { 00:16:41.218 "name": "BaseBdev3", 00:16:41.218 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:41.218 "is_configured": true, 00:16:41.218 "data_offset": 2048, 00:16:41.218 "data_size": 63488 00:16:41.218 } 00:16:41.218 ] 00:16:41.218 }' 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.218 12:47:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.784 12:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.784 12:47:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.784 12:47:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.784 [2024-11-06 12:47:30.291854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.784 [2024-11-06 12:47:30.292135] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.784 [2024-11-06 12:47:30.292164] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:41.784 [2024-11-06 12:47:30.292232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.784 [2024-11-06 12:47:30.307829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:41.784 12:47:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.784 12:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:41.784 [2024-11-06 12:47:30.315607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.720 
12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.720 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.720 "name": "raid_bdev1", 00:16:42.720 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:42.720 "strip_size_kb": 64, 00:16:42.720 "state": "online", 00:16:42.720 "raid_level": "raid5f", 00:16:42.720 "superblock": true, 00:16:42.720 "num_base_bdevs": 3, 00:16:42.720 "num_base_bdevs_discovered": 3, 00:16:42.720 "num_base_bdevs_operational": 3, 00:16:42.720 "process": { 00:16:42.720 "type": "rebuild", 00:16:42.720 "target": "spare", 00:16:42.720 "progress": { 00:16:42.720 "blocks": 18432, 00:16:42.720 "percent": 14 00:16:42.720 } 00:16:42.720 }, 00:16:42.720 "base_bdevs_list": [ 00:16:42.720 { 00:16:42.720 "name": "spare", 00:16:42.720 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:42.720 "is_configured": true, 00:16:42.720 "data_offset": 2048, 00:16:42.720 "data_size": 63488 00:16:42.720 }, 00:16:42.720 { 00:16:42.720 "name": "BaseBdev2", 00:16:42.720 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:42.720 "is_configured": true, 00:16:42.720 "data_offset": 2048, 00:16:42.720 "data_size": 63488 00:16:42.720 }, 00:16:42.720 { 00:16:42.720 "name": "BaseBdev3", 00:16:42.720 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:42.720 "is_configured": true, 00:16:42.720 "data_offset": 2048, 00:16:42.720 "data_size": 63488 00:16:42.720 } 00:16:42.720 ] 00:16:42.720 }' 00:16:42.720 12:47:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.978 [2024-11-06 12:47:31.482489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.978 [2024-11-06 12:47:31.534746] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.978 [2024-11-06 12:47:31.534904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.978 [2024-11-06 12:47:31.534943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.978 [2024-11-06 12:47:31.534962] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.978 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.979 
12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.979 "name": "raid_bdev1", 00:16:42.979 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:42.979 "strip_size_kb": 64, 00:16:42.979 "state": "online", 00:16:42.979 "raid_level": "raid5f", 00:16:42.979 "superblock": true, 00:16:42.979 "num_base_bdevs": 3, 00:16:42.979 "num_base_bdevs_discovered": 2, 00:16:42.979 "num_base_bdevs_operational": 2, 00:16:42.979 "base_bdevs_list": [ 00:16:42.979 { 00:16:42.979 "name": null, 00:16:42.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.979 "is_configured": false, 00:16:42.979 "data_offset": 0, 00:16:42.979 "data_size": 63488 00:16:42.979 }, 00:16:42.979 { 00:16:42.979 "name": "BaseBdev2", 00:16:42.979 "uuid": 
"21996014-899a-541b-8dba-168dfe89ccf5", 00:16:42.979 "is_configured": true, 00:16:42.979 "data_offset": 2048, 00:16:42.979 "data_size": 63488 00:16:42.979 }, 00:16:42.979 { 00:16:42.979 "name": "BaseBdev3", 00:16:42.979 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:42.979 "is_configured": true, 00:16:42.979 "data_offset": 2048, 00:16:42.979 "data_size": 63488 00:16:42.979 } 00:16:42.979 ] 00:16:42.979 }' 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.979 12:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.546 12:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.546 12:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.546 12:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.546 [2024-11-06 12:47:32.086451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.546 [2024-11-06 12:47:32.086566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.546 [2024-11-06 12:47:32.086605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:43.546 [2024-11-06 12:47:32.086637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.546 [2024-11-06 12:47:32.087386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.546 [2024-11-06 12:47:32.087437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.546 [2024-11-06 12:47:32.087624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:43.546 [2024-11-06 12:47:32.087678] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:16:43.546 [2024-11-06 12:47:32.087703] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:43.546 [2024-11-06 12:47:32.087771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.546 [2024-11-06 12:47:32.103482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:43.546 spare 00:16:43.546 12:47:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.546 12:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:43.546 [2024-11-06 12:47:32.111149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.482 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.740 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.740 "name": 
"raid_bdev1", 00:16:44.740 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:44.740 "strip_size_kb": 64, 00:16:44.740 "state": "online", 00:16:44.740 "raid_level": "raid5f", 00:16:44.740 "superblock": true, 00:16:44.740 "num_base_bdevs": 3, 00:16:44.740 "num_base_bdevs_discovered": 3, 00:16:44.740 "num_base_bdevs_operational": 3, 00:16:44.740 "process": { 00:16:44.740 "type": "rebuild", 00:16:44.740 "target": "spare", 00:16:44.740 "progress": { 00:16:44.740 "blocks": 18432, 00:16:44.740 "percent": 14 00:16:44.740 } 00:16:44.740 }, 00:16:44.740 "base_bdevs_list": [ 00:16:44.741 { 00:16:44.741 "name": "spare", 00:16:44.741 "uuid": "30823348-a273-511b-a1a6-1cdb1cc6588b", 00:16:44.741 "is_configured": true, 00:16:44.741 "data_offset": 2048, 00:16:44.741 "data_size": 63488 00:16:44.741 }, 00:16:44.741 { 00:16:44.741 "name": "BaseBdev2", 00:16:44.741 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:44.741 "is_configured": true, 00:16:44.741 "data_offset": 2048, 00:16:44.741 "data_size": 63488 00:16:44.741 }, 00:16:44.741 { 00:16:44.741 "name": "BaseBdev3", 00:16:44.741 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:44.741 "is_configured": true, 00:16:44.741 "data_offset": 2048, 00:16:44.741 "data_size": 63488 00:16:44.741 } 00:16:44.741 ] 00:16:44.741 }' 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.741 12:47:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.741 [2024-11-06 12:47:33.285587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.741 [2024-11-06 12:47:33.330011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.741 [2024-11-06 12:47:33.330101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.741 [2024-11-06 12:47:33.330157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.741 [2024-11-06 12:47:33.330174] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.741 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.000 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.000 "name": "raid_bdev1", 00:16:45.000 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:45.000 "strip_size_kb": 64, 00:16:45.000 "state": "online", 00:16:45.000 "raid_level": "raid5f", 00:16:45.000 "superblock": true, 00:16:45.000 "num_base_bdevs": 3, 00:16:45.000 "num_base_bdevs_discovered": 2, 00:16:45.000 "num_base_bdevs_operational": 2, 00:16:45.000 "base_bdevs_list": [ 00:16:45.000 { 00:16:45.000 "name": null, 00:16:45.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.000 "is_configured": false, 00:16:45.000 "data_offset": 0, 00:16:45.000 "data_size": 63488 00:16:45.000 }, 00:16:45.000 { 00:16:45.000 "name": "BaseBdev2", 00:16:45.000 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:45.000 "is_configured": true, 00:16:45.000 "data_offset": 2048, 00:16:45.000 "data_size": 63488 00:16:45.000 }, 00:16:45.000 { 00:16:45.000 "name": "BaseBdev3", 00:16:45.000 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:45.000 "is_configured": true, 00:16:45.000 "data_offset": 2048, 00:16:45.000 "data_size": 63488 00:16:45.000 } 00:16:45.000 ] 00:16:45.000 }' 00:16:45.000 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.000 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.258 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:16:45.258 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.258 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.258 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.258 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.258 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.259 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.259 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.259 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.259 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.517 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.517 "name": "raid_bdev1", 00:16:45.517 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:45.517 "strip_size_kb": 64, 00:16:45.517 "state": "online", 00:16:45.517 "raid_level": "raid5f", 00:16:45.517 "superblock": true, 00:16:45.517 "num_base_bdevs": 3, 00:16:45.517 "num_base_bdevs_discovered": 2, 00:16:45.517 "num_base_bdevs_operational": 2, 00:16:45.517 "base_bdevs_list": [ 00:16:45.517 { 00:16:45.517 "name": null, 00:16:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.517 "is_configured": false, 00:16:45.517 "data_offset": 0, 00:16:45.517 "data_size": 63488 00:16:45.517 }, 00:16:45.517 { 00:16:45.517 "name": "BaseBdev2", 00:16:45.517 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:45.517 "is_configured": true, 00:16:45.517 "data_offset": 2048, 00:16:45.517 "data_size": 63488 00:16:45.517 }, 00:16:45.517 { 
00:16:45.517 "name": "BaseBdev3", 00:16:45.517 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:45.517 "is_configured": true, 00:16:45.517 "data_offset": 2048, 00:16:45.517 "data_size": 63488 00:16:45.517 } 00:16:45.517 ] 00:16:45.517 }' 00:16:45.517 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.517 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.517 12:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.517 [2024-11-06 12:47:34.072234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.517 [2024-11-06 12:47:34.072309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.517 [2024-11-06 12:47:34.072348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:45.517 [2024-11-06 12:47:34.072365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.517 
[2024-11-06 12:47:34.073039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.517 [2024-11-06 12:47:34.073076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.517 [2024-11-06 12:47:34.073267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:45.517 [2024-11-06 12:47:34.073315] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.517 [2024-11-06 12:47:34.073341] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:45.517 [2024-11-06 12:47:34.073356] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:45.517 BaseBdev1 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.517 12:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.452 12:47:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.452 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.712 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.712 "name": "raid_bdev1", 00:16:46.712 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:46.712 "strip_size_kb": 64, 00:16:46.712 "state": "online", 00:16:46.712 "raid_level": "raid5f", 00:16:46.712 "superblock": true, 00:16:46.712 "num_base_bdevs": 3, 00:16:46.712 "num_base_bdevs_discovered": 2, 00:16:46.712 "num_base_bdevs_operational": 2, 00:16:46.712 "base_bdevs_list": [ 00:16:46.712 { 00:16:46.712 "name": null, 00:16:46.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.712 "is_configured": false, 00:16:46.712 "data_offset": 0, 00:16:46.712 "data_size": 63488 00:16:46.712 }, 00:16:46.712 { 00:16:46.712 "name": "BaseBdev2", 00:16:46.712 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:46.712 "is_configured": true, 00:16:46.712 "data_offset": 2048, 00:16:46.712 "data_size": 63488 00:16:46.712 }, 00:16:46.712 { 00:16:46.712 "name": "BaseBdev3", 00:16:46.712 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:46.712 "is_configured": true, 00:16:46.712 "data_offset": 2048, 00:16:46.712 "data_size": 63488 00:16:46.712 } 00:16:46.712 ] 00:16:46.712 }' 00:16:46.712 12:47:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.712 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.970 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.970 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.229 "name": "raid_bdev1", 00:16:47.229 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:47.229 "strip_size_kb": 64, 00:16:47.229 "state": "online", 00:16:47.229 "raid_level": "raid5f", 00:16:47.229 "superblock": true, 00:16:47.229 "num_base_bdevs": 3, 00:16:47.229 "num_base_bdevs_discovered": 2, 00:16:47.229 "num_base_bdevs_operational": 2, 00:16:47.229 "base_bdevs_list": [ 00:16:47.229 { 00:16:47.229 "name": null, 00:16:47.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.229 "is_configured": false, 00:16:47.229 "data_offset": 0, 00:16:47.229 "data_size": 63488 
00:16:47.229 }, 00:16:47.229 { 00:16:47.229 "name": "BaseBdev2", 00:16:47.229 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:47.229 "is_configured": true, 00:16:47.229 "data_offset": 2048, 00:16:47.229 "data_size": 63488 00:16:47.229 }, 00:16:47.229 { 00:16:47.229 "name": "BaseBdev3", 00:16:47.229 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:47.229 "is_configured": true, 00:16:47.229 "data_offset": 2048, 00:16:47.229 "data_size": 63488 00:16:47.229 } 00:16:47.229 ] 00:16:47.229 }' 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:47.229 12:47:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.229 [2024-11-06 12:47:35.820845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.229 [2024-11-06 12:47:35.821124] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.229 [2024-11-06 12:47:35.821150] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:47.229 request: 00:16:47.229 { 00:16:47.229 "base_bdev": "BaseBdev1", 00:16:47.229 "raid_bdev": "raid_bdev1", 00:16:47.229 "method": "bdev_raid_add_base_bdev", 00:16:47.229 "req_id": 1 00:16:47.229 } 00:16:47.229 Got JSON-RPC error response 00:16:47.229 response: 00:16:47.229 { 00:16:47.229 "code": -22, 00:16:47.229 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:47.229 } 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:47.229 12:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.606 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.607 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.607 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.607 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.607 "name": "raid_bdev1", 00:16:48.607 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:48.607 "strip_size_kb": 64, 00:16:48.607 "state": "online", 00:16:48.607 "raid_level": "raid5f", 00:16:48.607 "superblock": true, 00:16:48.607 "num_base_bdevs": 3, 00:16:48.607 "num_base_bdevs_discovered": 2, 00:16:48.607 "num_base_bdevs_operational": 2, 00:16:48.607 "base_bdevs_list": [ 00:16:48.607 { 00:16:48.607 "name": null, 00:16:48.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.607 "is_configured": false, 00:16:48.607 
"data_offset": 0, 00:16:48.607 "data_size": 63488 00:16:48.607 }, 00:16:48.607 { 00:16:48.607 "name": "BaseBdev2", 00:16:48.607 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:48.607 "is_configured": true, 00:16:48.607 "data_offset": 2048, 00:16:48.607 "data_size": 63488 00:16:48.607 }, 00:16:48.607 { 00:16:48.607 "name": "BaseBdev3", 00:16:48.607 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:48.607 "is_configured": true, 00:16:48.607 "data_offset": 2048, 00:16:48.607 "data_size": 63488 00:16:48.607 } 00:16:48.607 ] 00:16:48.607 }' 00:16:48.607 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.607 12:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.865 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.865 "name": 
"raid_bdev1", 00:16:48.865 "uuid": "a21378e5-ba5b-4a47-b65d-a298cb0427c6", 00:16:48.865 "strip_size_kb": 64, 00:16:48.865 "state": "online", 00:16:48.865 "raid_level": "raid5f", 00:16:48.865 "superblock": true, 00:16:48.865 "num_base_bdevs": 3, 00:16:48.865 "num_base_bdevs_discovered": 2, 00:16:48.866 "num_base_bdevs_operational": 2, 00:16:48.866 "base_bdevs_list": [ 00:16:48.866 { 00:16:48.866 "name": null, 00:16:48.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.866 "is_configured": false, 00:16:48.866 "data_offset": 0, 00:16:48.866 "data_size": 63488 00:16:48.866 }, 00:16:48.866 { 00:16:48.866 "name": "BaseBdev2", 00:16:48.866 "uuid": "21996014-899a-541b-8dba-168dfe89ccf5", 00:16:48.866 "is_configured": true, 00:16:48.866 "data_offset": 2048, 00:16:48.866 "data_size": 63488 00:16:48.866 }, 00:16:48.866 { 00:16:48.866 "name": "BaseBdev3", 00:16:48.866 "uuid": "2ef19651-e637-5372-a98a-a2904144c9c9", 00:16:48.866 "is_configured": true, 00:16:48.866 "data_offset": 2048, 00:16:48.866 "data_size": 63488 00:16:48.866 } 00:16:48.866 ] 00:16:48.866 }' 00:16:48.866 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.866 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.866 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82476 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82476 ']' 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82476 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:49.124 12:47:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82476 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:49.124 killing process with pid 82476 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82476' 00:16:49.124 Received shutdown signal, test time was about 60.000000 seconds 00:16:49.124 00:16:49.124 Latency(us) 00:16:49.124 [2024-11-06T12:47:37.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.124 [2024-11-06T12:47:37.781Z] =================================================================================================================== 00:16:49.124 [2024-11-06T12:47:37.781Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82476 00:16:49.124 [2024-11-06 12:47:37.557691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.124 [2024-11-06 12:47:37.557897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.124 [2024-11-06 12:47:37.558024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.124 [2024-11-06 12:47:37.558058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:49.124 12:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82476 00:16:49.383 [2024-11-06 12:47:37.953375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.763 12:47:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:50.763 00:16:50.763 real 0m25.238s 00:16:50.763 user 0m33.373s 00:16:50.764 sys 0m2.831s 00:16:50.764 12:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:50.764 12:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.764 ************************************ 00:16:50.764 END TEST raid5f_rebuild_test_sb 00:16:50.764 ************************************ 00:16:50.764 12:47:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:50.764 12:47:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:50.764 12:47:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:50.764 12:47:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:50.764 12:47:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.764 ************************************ 00:16:50.764 START TEST raid5f_state_function_test 00:16:50.764 ************************************ 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83242 00:16:50.764 Process raid pid: 83242 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83242' 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83242 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83242 ']' 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:50.764 12:47:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.764 [2024-11-06 12:47:39.270370] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:16:50.764 [2024-11-06 12:47:39.270564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.023 [2024-11-06 12:47:39.465078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.023 [2024-11-06 12:47:39.641559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.282 [2024-11-06 12:47:39.875899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.282 [2024-11-06 12:47:39.875981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.852 [2024-11-06 12:47:40.324283] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.852 [2024-11-06 12:47:40.324384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.852 [2024-11-06 
12:47:40.324403] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.852 [2024-11-06 12:47:40.324419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.852 [2024-11-06 12:47:40.324429] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.852 [2024-11-06 12:47:40.324444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.852 [2024-11-06 12:47:40.324454] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:51.852 [2024-11-06 12:47:40.324470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.852 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.853 12:47:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.853 "name": "Existed_Raid", 00:16:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.853 "strip_size_kb": 64, 00:16:51.853 "state": "configuring", 00:16:51.853 "raid_level": "raid5f", 00:16:51.853 "superblock": false, 00:16:51.853 "num_base_bdevs": 4, 00:16:51.853 "num_base_bdevs_discovered": 0, 00:16:51.853 "num_base_bdevs_operational": 4, 00:16:51.853 "base_bdevs_list": [ 00:16:51.853 { 00:16:51.853 "name": "BaseBdev1", 00:16:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.853 "is_configured": false, 00:16:51.853 "data_offset": 0, 00:16:51.853 "data_size": 0 00:16:51.853 }, 00:16:51.853 { 00:16:51.853 "name": "BaseBdev2", 00:16:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.853 "is_configured": false, 00:16:51.853 "data_offset": 0, 00:16:51.853 "data_size": 0 00:16:51.853 }, 00:16:51.853 { 00:16:51.853 "name": "BaseBdev3", 00:16:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.853 "is_configured": false, 00:16:51.853 "data_offset": 0, 00:16:51.853 "data_size": 0 00:16:51.853 }, 00:16:51.853 { 00:16:51.853 "name": "BaseBdev4", 00:16:51.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.853 "is_configured": false, 00:16:51.853 
"data_offset": 0, 00:16:51.853 "data_size": 0 00:16:51.853 } 00:16:51.853 ] 00:16:51.853 }' 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.853 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 [2024-11-06 12:47:40.852401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.420 [2024-11-06 12:47:40.852475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 [2024-11-06 12:47:40.860339] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.420 [2024-11-06 12:47:40.860415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.420 [2024-11-06 12:47:40.860432] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.420 [2024-11-06 12:47:40.860449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.420 [2024-11-06 
12:47:40.860459] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.420 [2024-11-06 12:47:40.860474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.420 [2024-11-06 12:47:40.860483] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:52.420 [2024-11-06 12:47:40.860497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 [2024-11-06 12:47:40.910379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.420 BaseBdev1 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.420 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 [ 00:16:52.420 { 00:16:52.420 "name": "BaseBdev1", 00:16:52.420 "aliases": [ 00:16:52.420 "904e8c6d-7b46-452b-8602-8ff15f13c794" 00:16:52.420 ], 00:16:52.420 "product_name": "Malloc disk", 00:16:52.420 "block_size": 512, 00:16:52.420 "num_blocks": 65536, 00:16:52.420 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:52.420 "assigned_rate_limits": { 00:16:52.420 "rw_ios_per_sec": 0, 00:16:52.420 "rw_mbytes_per_sec": 0, 00:16:52.420 "r_mbytes_per_sec": 0, 00:16:52.420 "w_mbytes_per_sec": 0 00:16:52.420 }, 00:16:52.420 "claimed": true, 00:16:52.420 "claim_type": "exclusive_write", 00:16:52.420 "zoned": false, 00:16:52.420 "supported_io_types": { 00:16:52.420 "read": true, 00:16:52.420 "write": true, 00:16:52.420 "unmap": true, 00:16:52.420 "flush": true, 00:16:52.420 "reset": true, 00:16:52.420 "nvme_admin": false, 00:16:52.420 "nvme_io": false, 00:16:52.420 "nvme_io_md": false, 00:16:52.420 "write_zeroes": true, 00:16:52.420 "zcopy": true, 00:16:52.420 "get_zone_info": false, 00:16:52.420 "zone_management": false, 00:16:52.420 "zone_append": false, 00:16:52.420 "compare": false, 00:16:52.420 "compare_and_write": false, 00:16:52.420 "abort": true, 00:16:52.420 "seek_hole": false, 00:16:52.420 "seek_data": false, 00:16:52.420 "copy": true, 00:16:52.420 
"nvme_iov_md": false 00:16:52.420 }, 00:16:52.420 "memory_domains": [ 00:16:52.420 { 00:16:52.420 "dma_device_id": "system", 00:16:52.420 "dma_device_type": 1 00:16:52.420 }, 00:16:52.420 { 00:16:52.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.420 "dma_device_type": 2 00:16:52.420 } 00:16:52.421 ], 00:16:52.421 "driver_specific": {} 00:16:52.421 } 00:16:52.421 ] 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 12:47:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.421 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.421 "name": "Existed_Raid", 00:16:52.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.421 "strip_size_kb": 64, 00:16:52.421 "state": "configuring", 00:16:52.421 "raid_level": "raid5f", 00:16:52.421 "superblock": false, 00:16:52.421 "num_base_bdevs": 4, 00:16:52.421 "num_base_bdevs_discovered": 1, 00:16:52.421 "num_base_bdevs_operational": 4, 00:16:52.421 "base_bdevs_list": [ 00:16:52.421 { 00:16:52.421 "name": "BaseBdev1", 00:16:52.421 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:52.421 "is_configured": true, 00:16:52.421 "data_offset": 0, 00:16:52.421 "data_size": 65536 00:16:52.421 }, 00:16:52.421 { 00:16:52.421 "name": "BaseBdev2", 00:16:52.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.421 "is_configured": false, 00:16:52.421 "data_offset": 0, 00:16:52.421 "data_size": 0 00:16:52.421 }, 00:16:52.421 { 00:16:52.421 "name": "BaseBdev3", 00:16:52.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.421 "is_configured": false, 00:16:52.421 "data_offset": 0, 00:16:52.421 "data_size": 0 00:16:52.421 }, 00:16:52.421 { 00:16:52.421 "name": "BaseBdev4", 00:16:52.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.421 "is_configured": false, 00:16:52.421 "data_offset": 0, 00:16:52.421 "data_size": 0 00:16:52.421 } 00:16:52.421 ] 00:16:52.421 }' 00:16:52.421 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.421 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:52.988 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.988 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.988 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.988 [2024-11-06 12:47:41.498564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.988 [2024-11-06 12:47:41.498676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.989 [2024-11-06 12:47:41.506650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.989 [2024-11-06 12:47:41.509368] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.989 [2024-11-06 12:47:41.509431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.989 [2024-11-06 12:47:41.509450] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.989 [2024-11-06 12:47:41.509468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.989 [2024-11-06 12:47:41.509478] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:52.989 [2024-11-06 12:47:41.509493] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.989 12:47:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.989 "name": "Existed_Raid", 00:16:52.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.989 "strip_size_kb": 64, 00:16:52.989 "state": "configuring", 00:16:52.989 "raid_level": "raid5f", 00:16:52.989 "superblock": false, 00:16:52.989 "num_base_bdevs": 4, 00:16:52.989 "num_base_bdevs_discovered": 1, 00:16:52.989 "num_base_bdevs_operational": 4, 00:16:52.989 "base_bdevs_list": [ 00:16:52.989 { 00:16:52.989 "name": "BaseBdev1", 00:16:52.989 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:52.989 "is_configured": true, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 65536 00:16:52.989 }, 00:16:52.989 { 00:16:52.989 "name": "BaseBdev2", 00:16:52.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.989 "is_configured": false, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 0 00:16:52.989 }, 00:16:52.989 { 00:16:52.989 "name": "BaseBdev3", 00:16:52.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.989 "is_configured": false, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 0 00:16:52.989 }, 00:16:52.989 { 00:16:52.989 "name": "BaseBdev4", 00:16:52.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.989 "is_configured": false, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 0 00:16:52.989 } 00:16:52.989 ] 00:16:52.989 }' 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.989 12:47:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.555 12:47:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.555 [2024-11-06 12:47:42.081558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.555 BaseBdev2 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.555 [ 00:16:53.555 { 00:16:53.555 "name": 
"BaseBdev2", 00:16:53.555 "aliases": [ 00:16:53.555 "e23de4bd-621d-403e-b553-cc486392d945" 00:16:53.555 ], 00:16:53.555 "product_name": "Malloc disk", 00:16:53.555 "block_size": 512, 00:16:53.555 "num_blocks": 65536, 00:16:53.555 "uuid": "e23de4bd-621d-403e-b553-cc486392d945", 00:16:53.555 "assigned_rate_limits": { 00:16:53.555 "rw_ios_per_sec": 0, 00:16:53.555 "rw_mbytes_per_sec": 0, 00:16:53.555 "r_mbytes_per_sec": 0, 00:16:53.555 "w_mbytes_per_sec": 0 00:16:53.555 }, 00:16:53.555 "claimed": true, 00:16:53.555 "claim_type": "exclusive_write", 00:16:53.555 "zoned": false, 00:16:53.555 "supported_io_types": { 00:16:53.555 "read": true, 00:16:53.555 "write": true, 00:16:53.555 "unmap": true, 00:16:53.555 "flush": true, 00:16:53.555 "reset": true, 00:16:53.555 "nvme_admin": false, 00:16:53.555 "nvme_io": false, 00:16:53.555 "nvme_io_md": false, 00:16:53.555 "write_zeroes": true, 00:16:53.555 "zcopy": true, 00:16:53.555 "get_zone_info": false, 00:16:53.555 "zone_management": false, 00:16:53.555 "zone_append": false, 00:16:53.555 "compare": false, 00:16:53.555 "compare_and_write": false, 00:16:53.555 "abort": true, 00:16:53.555 "seek_hole": false, 00:16:53.555 "seek_data": false, 00:16:53.555 "copy": true, 00:16:53.555 "nvme_iov_md": false 00:16:53.555 }, 00:16:53.555 "memory_domains": [ 00:16:53.555 { 00:16:53.555 "dma_device_id": "system", 00:16:53.555 "dma_device_type": 1 00:16:53.555 }, 00:16:53.555 { 00:16:53.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.555 "dma_device_type": 2 00:16:53.555 } 00:16:53.555 ], 00:16:53.555 "driver_specific": {} 00:16:53.555 } 00:16:53.555 ] 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.555 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.556 "name": "Existed_Raid", 00:16:53.556 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:53.556 "strip_size_kb": 64, 00:16:53.556 "state": "configuring", 00:16:53.556 "raid_level": "raid5f", 00:16:53.556 "superblock": false, 00:16:53.556 "num_base_bdevs": 4, 00:16:53.556 "num_base_bdevs_discovered": 2, 00:16:53.556 "num_base_bdevs_operational": 4, 00:16:53.556 "base_bdevs_list": [ 00:16:53.556 { 00:16:53.556 "name": "BaseBdev1", 00:16:53.556 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:53.556 "is_configured": true, 00:16:53.556 "data_offset": 0, 00:16:53.556 "data_size": 65536 00:16:53.556 }, 00:16:53.556 { 00:16:53.556 "name": "BaseBdev2", 00:16:53.556 "uuid": "e23de4bd-621d-403e-b553-cc486392d945", 00:16:53.556 "is_configured": true, 00:16:53.556 "data_offset": 0, 00:16:53.556 "data_size": 65536 00:16:53.556 }, 00:16:53.556 { 00:16:53.556 "name": "BaseBdev3", 00:16:53.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.556 "is_configured": false, 00:16:53.556 "data_offset": 0, 00:16:53.556 "data_size": 0 00:16:53.556 }, 00:16:53.556 { 00:16:53.556 "name": "BaseBdev4", 00:16:53.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.556 "is_configured": false, 00:16:53.556 "data_offset": 0, 00:16:53.556 "data_size": 0 00:16:53.556 } 00:16:53.556 ] 00:16:53.556 }' 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.556 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.123 [2024-11-06 12:47:42.701136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.123 BaseBdev3 00:16:54.123 12:47:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.123 [ 00:16:54.123 { 00:16:54.123 "name": "BaseBdev3", 00:16:54.123 "aliases": [ 00:16:54.123 "1f8cea64-ab1a-4998-ad87-901e504e33d0" 00:16:54.123 ], 00:16:54.123 "product_name": "Malloc disk", 00:16:54.123 "block_size": 512, 00:16:54.123 "num_blocks": 65536, 00:16:54.123 "uuid": "1f8cea64-ab1a-4998-ad87-901e504e33d0", 00:16:54.123 "assigned_rate_limits": { 00:16:54.123 "rw_ios_per_sec": 0, 00:16:54.123 
"rw_mbytes_per_sec": 0, 00:16:54.123 "r_mbytes_per_sec": 0, 00:16:54.123 "w_mbytes_per_sec": 0 00:16:54.123 }, 00:16:54.123 "claimed": true, 00:16:54.123 "claim_type": "exclusive_write", 00:16:54.123 "zoned": false, 00:16:54.123 "supported_io_types": { 00:16:54.123 "read": true, 00:16:54.123 "write": true, 00:16:54.123 "unmap": true, 00:16:54.123 "flush": true, 00:16:54.123 "reset": true, 00:16:54.123 "nvme_admin": false, 00:16:54.123 "nvme_io": false, 00:16:54.123 "nvme_io_md": false, 00:16:54.123 "write_zeroes": true, 00:16:54.123 "zcopy": true, 00:16:54.123 "get_zone_info": false, 00:16:54.123 "zone_management": false, 00:16:54.123 "zone_append": false, 00:16:54.123 "compare": false, 00:16:54.123 "compare_and_write": false, 00:16:54.123 "abort": true, 00:16:54.123 "seek_hole": false, 00:16:54.123 "seek_data": false, 00:16:54.123 "copy": true, 00:16:54.123 "nvme_iov_md": false 00:16:54.123 }, 00:16:54.123 "memory_domains": [ 00:16:54.123 { 00:16:54.123 "dma_device_id": "system", 00:16:54.123 "dma_device_type": 1 00:16:54.123 }, 00:16:54.123 { 00:16:54.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.123 "dma_device_type": 2 00:16:54.123 } 00:16:54.123 ], 00:16:54.123 "driver_specific": {} 00:16:54.123 } 00:16:54.123 ] 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.123 12:47:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.123 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.380 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.380 "name": "Existed_Raid", 00:16:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.380 "strip_size_kb": 64, 00:16:54.380 "state": "configuring", 00:16:54.380 "raid_level": "raid5f", 00:16:54.380 "superblock": false, 00:16:54.380 "num_base_bdevs": 4, 00:16:54.380 "num_base_bdevs_discovered": 3, 00:16:54.380 "num_base_bdevs_operational": 4, 00:16:54.380 "base_bdevs_list": [ 00:16:54.380 { 
00:16:54.380 "name": "BaseBdev1", 00:16:54.380 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:54.380 "is_configured": true, 00:16:54.380 "data_offset": 0, 00:16:54.380 "data_size": 65536 00:16:54.380 }, 00:16:54.380 { 00:16:54.380 "name": "BaseBdev2", 00:16:54.380 "uuid": "e23de4bd-621d-403e-b553-cc486392d945", 00:16:54.380 "is_configured": true, 00:16:54.380 "data_offset": 0, 00:16:54.380 "data_size": 65536 00:16:54.380 }, 00:16:54.380 { 00:16:54.380 "name": "BaseBdev3", 00:16:54.380 "uuid": "1f8cea64-ab1a-4998-ad87-901e504e33d0", 00:16:54.380 "is_configured": true, 00:16:54.380 "data_offset": 0, 00:16:54.380 "data_size": 65536 00:16:54.380 }, 00:16:54.380 { 00:16:54.380 "name": "BaseBdev4", 00:16:54.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.380 "is_configured": false, 00:16:54.380 "data_offset": 0, 00:16:54.380 "data_size": 0 00:16:54.380 } 00:16:54.380 ] 00:16:54.380 }' 00:16:54.380 12:47:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.380 12:47:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.946 [2024-11-06 12:47:43.363241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.946 [2024-11-06 12:47:43.363372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:54.946 [2024-11-06 12:47:43.363390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:54.946 [2024-11-06 12:47:43.363782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:54.946 [2024-11-06 
12:47:43.370856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:54.946 [2024-11-06 12:47:43.370899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:54.946 BaseBdev4 00:16:54.946 [2024-11-06 12:47:43.371311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:54.946 [ 00:16:54.946 { 00:16:54.946 "name": "BaseBdev4", 00:16:54.946 "aliases": [ 00:16:54.946 "2de189c2-b4d5-4321-8967-eee6474c465b" 00:16:54.946 ], 00:16:54.946 "product_name": "Malloc disk", 00:16:54.946 "block_size": 512, 00:16:54.946 "num_blocks": 65536, 00:16:54.946 "uuid": "2de189c2-b4d5-4321-8967-eee6474c465b", 00:16:54.946 "assigned_rate_limits": { 00:16:54.946 "rw_ios_per_sec": 0, 00:16:54.946 "rw_mbytes_per_sec": 0, 00:16:54.946 "r_mbytes_per_sec": 0, 00:16:54.946 "w_mbytes_per_sec": 0 00:16:54.946 }, 00:16:54.946 "claimed": true, 00:16:54.946 "claim_type": "exclusive_write", 00:16:54.946 "zoned": false, 00:16:54.946 "supported_io_types": { 00:16:54.946 "read": true, 00:16:54.946 "write": true, 00:16:54.946 "unmap": true, 00:16:54.946 "flush": true, 00:16:54.946 "reset": true, 00:16:54.946 "nvme_admin": false, 00:16:54.946 "nvme_io": false, 00:16:54.946 "nvme_io_md": false, 00:16:54.946 "write_zeroes": true, 00:16:54.946 "zcopy": true, 00:16:54.946 "get_zone_info": false, 00:16:54.946 "zone_management": false, 00:16:54.946 "zone_append": false, 00:16:54.946 "compare": false, 00:16:54.946 "compare_and_write": false, 00:16:54.946 "abort": true, 00:16:54.946 "seek_hole": false, 00:16:54.946 "seek_data": false, 00:16:54.946 "copy": true, 00:16:54.946 "nvme_iov_md": false 00:16:54.946 }, 00:16:54.946 "memory_domains": [ 00:16:54.946 { 00:16:54.946 "dma_device_id": "system", 00:16:54.946 "dma_device_type": 1 00:16:54.946 }, 00:16:54.946 { 00:16:54.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.946 "dma_device_type": 2 00:16:54.946 } 00:16:54.946 ], 00:16:54.946 "driver_specific": {} 00:16:54.946 } 00:16:54.946 ] 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.946 12:47:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.946 "name": "Existed_Raid", 00:16:54.946 
"uuid": "9b2c90a0-5605-4ed4-aa62-ab1bcb73d6c7", 00:16:54.946 "strip_size_kb": 64, 00:16:54.946 "state": "online", 00:16:54.946 "raid_level": "raid5f", 00:16:54.946 "superblock": false, 00:16:54.946 "num_base_bdevs": 4, 00:16:54.946 "num_base_bdevs_discovered": 4, 00:16:54.946 "num_base_bdevs_operational": 4, 00:16:54.946 "base_bdevs_list": [ 00:16:54.946 { 00:16:54.946 "name": "BaseBdev1", 00:16:54.946 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:54.946 "is_configured": true, 00:16:54.946 "data_offset": 0, 00:16:54.946 "data_size": 65536 00:16:54.946 }, 00:16:54.946 { 00:16:54.946 "name": "BaseBdev2", 00:16:54.946 "uuid": "e23de4bd-621d-403e-b553-cc486392d945", 00:16:54.946 "is_configured": true, 00:16:54.946 "data_offset": 0, 00:16:54.946 "data_size": 65536 00:16:54.946 }, 00:16:54.946 { 00:16:54.946 "name": "BaseBdev3", 00:16:54.946 "uuid": "1f8cea64-ab1a-4998-ad87-901e504e33d0", 00:16:54.946 "is_configured": true, 00:16:54.946 "data_offset": 0, 00:16:54.946 "data_size": 65536 00:16:54.946 }, 00:16:54.946 { 00:16:54.946 "name": "BaseBdev4", 00:16:54.946 "uuid": "2de189c2-b4d5-4321-8967-eee6474c465b", 00:16:54.946 "is_configured": true, 00:16:54.946 "data_offset": 0, 00:16:54.946 "data_size": 65536 00:16:54.946 } 00:16:54.946 ] 00:16:54.946 }' 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.946 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:55.514 12:47:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:55.514 [2024-11-06 12:47:43.942274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:55.514 "name": "Existed_Raid", 00:16:55.514 "aliases": [ 00:16:55.514 "9b2c90a0-5605-4ed4-aa62-ab1bcb73d6c7" 00:16:55.514 ], 00:16:55.514 "product_name": "Raid Volume", 00:16:55.514 "block_size": 512, 00:16:55.514 "num_blocks": 196608, 00:16:55.514 "uuid": "9b2c90a0-5605-4ed4-aa62-ab1bcb73d6c7", 00:16:55.514 "assigned_rate_limits": { 00:16:55.514 "rw_ios_per_sec": 0, 00:16:55.514 "rw_mbytes_per_sec": 0, 00:16:55.514 "r_mbytes_per_sec": 0, 00:16:55.514 "w_mbytes_per_sec": 0 00:16:55.514 }, 00:16:55.514 "claimed": false, 00:16:55.514 "zoned": false, 00:16:55.514 "supported_io_types": { 00:16:55.514 "read": true, 00:16:55.514 "write": true, 00:16:55.514 "unmap": false, 00:16:55.514 "flush": false, 00:16:55.514 "reset": true, 00:16:55.514 "nvme_admin": false, 00:16:55.514 "nvme_io": false, 00:16:55.514 "nvme_io_md": false, 00:16:55.514 "write_zeroes": true, 00:16:55.514 "zcopy": false, 00:16:55.514 "get_zone_info": false, 00:16:55.514 "zone_management": false, 00:16:55.514 "zone_append": false, 
00:16:55.514 "compare": false, 00:16:55.514 "compare_and_write": false, 00:16:55.514 "abort": false, 00:16:55.514 "seek_hole": false, 00:16:55.514 "seek_data": false, 00:16:55.514 "copy": false, 00:16:55.514 "nvme_iov_md": false 00:16:55.514 }, 00:16:55.514 "driver_specific": { 00:16:55.514 "raid": { 00:16:55.514 "uuid": "9b2c90a0-5605-4ed4-aa62-ab1bcb73d6c7", 00:16:55.514 "strip_size_kb": 64, 00:16:55.514 "state": "online", 00:16:55.514 "raid_level": "raid5f", 00:16:55.514 "superblock": false, 00:16:55.514 "num_base_bdevs": 4, 00:16:55.514 "num_base_bdevs_discovered": 4, 00:16:55.514 "num_base_bdevs_operational": 4, 00:16:55.514 "base_bdevs_list": [ 00:16:55.514 { 00:16:55.514 "name": "BaseBdev1", 00:16:55.514 "uuid": "904e8c6d-7b46-452b-8602-8ff15f13c794", 00:16:55.514 "is_configured": true, 00:16:55.514 "data_offset": 0, 00:16:55.514 "data_size": 65536 00:16:55.514 }, 00:16:55.514 { 00:16:55.514 "name": "BaseBdev2", 00:16:55.514 "uuid": "e23de4bd-621d-403e-b553-cc486392d945", 00:16:55.514 "is_configured": true, 00:16:55.514 "data_offset": 0, 00:16:55.514 "data_size": 65536 00:16:55.514 }, 00:16:55.514 { 00:16:55.514 "name": "BaseBdev3", 00:16:55.514 "uuid": "1f8cea64-ab1a-4998-ad87-901e504e33d0", 00:16:55.514 "is_configured": true, 00:16:55.514 "data_offset": 0, 00:16:55.514 "data_size": 65536 00:16:55.514 }, 00:16:55.514 { 00:16:55.514 "name": "BaseBdev4", 00:16:55.514 "uuid": "2de189c2-b4d5-4321-8967-eee6474c465b", 00:16:55.514 "is_configured": true, 00:16:55.514 "data_offset": 0, 00:16:55.514 "data_size": 65536 00:16:55.514 } 00:16:55.514 ] 00:16:55.514 } 00:16:55.514 } 00:16:55.514 }' 00:16:55.514 12:47:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.514 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:55.514 BaseBdev2 00:16:55.514 BaseBdev3 00:16:55.514 BaseBdev4' 00:16:55.514 
12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.514 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.515 12:47:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.515 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.773 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.774 [2024-11-06 12:47:44.266133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.774 12:47:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.774 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.032 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.032 "name": "Existed_Raid", 00:16:56.032 "uuid": "9b2c90a0-5605-4ed4-aa62-ab1bcb73d6c7", 00:16:56.032 "strip_size_kb": 64, 00:16:56.032 "state": "online", 00:16:56.032 "raid_level": "raid5f", 00:16:56.032 "superblock": false, 00:16:56.032 "num_base_bdevs": 4, 00:16:56.032 "num_base_bdevs_discovered": 3, 00:16:56.032 "num_base_bdevs_operational": 3, 00:16:56.032 "base_bdevs_list": [ 00:16:56.032 { 00:16:56.032 "name": null, 00:16:56.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.032 "is_configured": false, 00:16:56.032 "data_offset": 0, 00:16:56.032 "data_size": 65536 00:16:56.032 }, 00:16:56.032 { 00:16:56.032 "name": "BaseBdev2", 00:16:56.032 "uuid": "e23de4bd-621d-403e-b553-cc486392d945", 00:16:56.032 "is_configured": true, 00:16:56.032 "data_offset": 0, 00:16:56.032 "data_size": 65536 00:16:56.032 }, 00:16:56.032 { 00:16:56.032 "name": "BaseBdev3", 
00:16:56.032 "uuid": "1f8cea64-ab1a-4998-ad87-901e504e33d0", 00:16:56.032 "is_configured": true, 00:16:56.032 "data_offset": 0, 00:16:56.032 "data_size": 65536 00:16:56.032 }, 00:16:56.032 { 00:16:56.032 "name": "BaseBdev4", 00:16:56.032 "uuid": "2de189c2-b4d5-4321-8967-eee6474c465b", 00:16:56.032 "is_configured": true, 00:16:56.032 "data_offset": 0, 00:16:56.032 "data_size": 65536 00:16:56.032 } 00:16:56.032 ] 00:16:56.032 }' 00:16:56.032 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.032 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.290 12:47:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:56.290 [2024-11-06 12:47:44.908992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.290 [2024-11-06 12:47:44.909309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.547 [2024-11-06 12:47:45.006329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.547 [2024-11-06 12:47:45.066464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.547 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.806 [2024-11-06 12:47:45.240417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:56.806 [2024-11-06 12:47:45.240560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.806 
12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:56.806 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.807 BaseBdev2 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@903 -- # local i 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.807 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.066 [ 00:16:57.066 { 00:16:57.066 "name": "BaseBdev2", 00:16:57.066 "aliases": [ 00:16:57.066 "17de904e-4f07-4fc8-990c-a6ec4bb19790" 00:16:57.066 ], 00:16:57.066 "product_name": "Malloc disk", 00:16:57.066 "block_size": 512, 00:16:57.066 "num_blocks": 65536, 00:16:57.066 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:16:57.066 "assigned_rate_limits": { 00:16:57.066 "rw_ios_per_sec": 0, 00:16:57.066 "rw_mbytes_per_sec": 0, 00:16:57.066 "r_mbytes_per_sec": 0, 00:16:57.066 "w_mbytes_per_sec": 0 00:16:57.066 }, 00:16:57.066 "claimed": false, 00:16:57.066 "zoned": false, 00:16:57.066 "supported_io_types": { 00:16:57.066 "read": true, 00:16:57.066 "write": true, 00:16:57.066 "unmap": true, 00:16:57.066 "flush": true, 00:16:57.066 "reset": true, 00:16:57.066 "nvme_admin": false, 00:16:57.066 "nvme_io": false, 00:16:57.066 "nvme_io_md": false, 00:16:57.066 "write_zeroes": true, 00:16:57.066 "zcopy": true, 
00:16:57.066 "get_zone_info": false, 00:16:57.066 "zone_management": false, 00:16:57.066 "zone_append": false, 00:16:57.066 "compare": false, 00:16:57.066 "compare_and_write": false, 00:16:57.066 "abort": true, 00:16:57.066 "seek_hole": false, 00:16:57.066 "seek_data": false, 00:16:57.066 "copy": true, 00:16:57.066 "nvme_iov_md": false 00:16:57.066 }, 00:16:57.066 "memory_domains": [ 00:16:57.066 { 00:16:57.066 "dma_device_id": "system", 00:16:57.066 "dma_device_type": 1 00:16:57.066 }, 00:16:57.066 { 00:16:57.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.066 "dma_device_type": 2 00:16:57.066 } 00:16:57.066 ], 00:16:57.066 "driver_specific": {} 00:16:57.066 } 00:16:57.066 ] 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.066 BaseBdev3 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:57.066 12:47:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.066 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.066 [ 00:16:57.066 { 00:16:57.066 "name": "BaseBdev3", 00:16:57.066 "aliases": [ 00:16:57.066 "849a12d7-c5e4-47f8-b4fb-24e48e347fe4" 00:16:57.066 ], 00:16:57.066 "product_name": "Malloc disk", 00:16:57.066 "block_size": 512, 00:16:57.066 "num_blocks": 65536, 00:16:57.066 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:16:57.066 "assigned_rate_limits": { 00:16:57.066 "rw_ios_per_sec": 0, 00:16:57.066 "rw_mbytes_per_sec": 0, 00:16:57.066 "r_mbytes_per_sec": 0, 00:16:57.066 "w_mbytes_per_sec": 0 00:16:57.066 }, 00:16:57.066 "claimed": false, 00:16:57.067 "zoned": false, 00:16:57.067 "supported_io_types": { 00:16:57.067 "read": true, 00:16:57.067 "write": true, 00:16:57.067 "unmap": true, 00:16:57.067 "flush": true, 00:16:57.067 "reset": true, 00:16:57.067 "nvme_admin": false, 00:16:57.067 "nvme_io": false, 00:16:57.067 "nvme_io_md": false, 00:16:57.067 
"write_zeroes": true, 00:16:57.067 "zcopy": true, 00:16:57.067 "get_zone_info": false, 00:16:57.067 "zone_management": false, 00:16:57.067 "zone_append": false, 00:16:57.067 "compare": false, 00:16:57.067 "compare_and_write": false, 00:16:57.067 "abort": true, 00:16:57.067 "seek_hole": false, 00:16:57.067 "seek_data": false, 00:16:57.067 "copy": true, 00:16:57.067 "nvme_iov_md": false 00:16:57.067 }, 00:16:57.067 "memory_domains": [ 00:16:57.067 { 00:16:57.067 "dma_device_id": "system", 00:16:57.067 "dma_device_type": 1 00:16:57.067 }, 00:16:57.067 { 00:16:57.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.067 "dma_device_type": 2 00:16:57.067 } 00:16:57.067 ], 00:16:57.067 "driver_specific": {} 00:16:57.067 } 00:16:57.067 ] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.067 BaseBdev4 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.067 [ 00:16:57.067 { 00:16:57.067 "name": "BaseBdev4", 00:16:57.067 "aliases": [ 00:16:57.067 "4c1f09d1-438c-40af-b499-049a5c670067" 00:16:57.067 ], 00:16:57.067 "product_name": "Malloc disk", 00:16:57.067 "block_size": 512, 00:16:57.067 "num_blocks": 65536, 00:16:57.067 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:16:57.067 "assigned_rate_limits": { 00:16:57.067 "rw_ios_per_sec": 0, 00:16:57.067 "rw_mbytes_per_sec": 0, 00:16:57.067 "r_mbytes_per_sec": 0, 00:16:57.067 "w_mbytes_per_sec": 0 00:16:57.067 }, 00:16:57.067 "claimed": false, 00:16:57.067 "zoned": false, 00:16:57.067 "supported_io_types": { 00:16:57.067 "read": true, 00:16:57.067 "write": true, 00:16:57.067 "unmap": true, 00:16:57.067 "flush": true, 00:16:57.067 "reset": true, 00:16:57.067 "nvme_admin": false, 00:16:57.067 "nvme_io": false, 00:16:57.067 
"nvme_io_md": false, 00:16:57.067 "write_zeroes": true, 00:16:57.067 "zcopy": true, 00:16:57.067 "get_zone_info": false, 00:16:57.067 "zone_management": false, 00:16:57.067 "zone_append": false, 00:16:57.067 "compare": false, 00:16:57.067 "compare_and_write": false, 00:16:57.067 "abort": true, 00:16:57.067 "seek_hole": false, 00:16:57.067 "seek_data": false, 00:16:57.067 "copy": true, 00:16:57.067 "nvme_iov_md": false 00:16:57.067 }, 00:16:57.067 "memory_domains": [ 00:16:57.067 { 00:16:57.067 "dma_device_id": "system", 00:16:57.067 "dma_device_type": 1 00:16:57.067 }, 00:16:57.067 { 00:16:57.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.067 "dma_device_type": 2 00:16:57.067 } 00:16:57.067 ], 00:16:57.067 "driver_specific": {} 00:16:57.067 } 00:16:57.067 ] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.067 [2024-11-06 12:47:45.667950] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.067 [2024-11-06 12:47:45.668033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.067 [2024-11-06 12:47:45.668068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:57.067 [2024-11-06 12:47:45.670944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.067 [2024-11-06 12:47:45.671033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:57.067 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.326 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.326 "name": "Existed_Raid", 00:16:57.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.326 "strip_size_kb": 64, 00:16:57.326 "state": "configuring", 00:16:57.326 "raid_level": "raid5f", 00:16:57.326 "superblock": false, 00:16:57.326 "num_base_bdevs": 4, 00:16:57.326 "num_base_bdevs_discovered": 3, 00:16:57.326 "num_base_bdevs_operational": 4, 00:16:57.326 "base_bdevs_list": [ 00:16:57.326 { 00:16:57.326 "name": "BaseBdev1", 00:16:57.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.326 "is_configured": false, 00:16:57.326 "data_offset": 0, 00:16:57.326 "data_size": 0 00:16:57.326 }, 00:16:57.326 { 00:16:57.326 "name": "BaseBdev2", 00:16:57.326 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:16:57.326 "is_configured": true, 00:16:57.326 "data_offset": 0, 00:16:57.326 "data_size": 65536 00:16:57.326 }, 00:16:57.326 { 00:16:57.326 "name": "BaseBdev3", 00:16:57.326 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:16:57.326 "is_configured": true, 00:16:57.326 "data_offset": 0, 00:16:57.326 "data_size": 65536 00:16:57.326 }, 00:16:57.326 { 00:16:57.326 "name": "BaseBdev4", 00:16:57.326 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:16:57.326 "is_configured": true, 00:16:57.326 "data_offset": 0, 00:16:57.326 "data_size": 65536 00:16:57.326 } 00:16:57.326 ] 00:16:57.326 }' 00:16:57.326 12:47:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.326 12:47:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.584 [2024-11-06 12:47:46.216187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:57.584 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.842 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.842 "name": "Existed_Raid", 00:16:57.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.842 "strip_size_kb": 64, 00:16:57.842 "state": "configuring", 00:16:57.842 "raid_level": "raid5f", 00:16:57.842 "superblock": false, 00:16:57.842 "num_base_bdevs": 4, 00:16:57.842 "num_base_bdevs_discovered": 2, 00:16:57.842 "num_base_bdevs_operational": 4, 00:16:57.842 "base_bdevs_list": [ 00:16:57.842 { 00:16:57.842 "name": "BaseBdev1", 00:16:57.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.842 "is_configured": false, 00:16:57.842 "data_offset": 0, 00:16:57.842 "data_size": 0 00:16:57.842 }, 00:16:57.842 { 00:16:57.842 "name": null, 00:16:57.842 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:16:57.842 "is_configured": false, 00:16:57.842 "data_offset": 0, 00:16:57.842 "data_size": 65536 00:16:57.843 }, 00:16:57.843 { 00:16:57.843 "name": "BaseBdev3", 00:16:57.843 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:16:57.843 "is_configured": true, 00:16:57.843 "data_offset": 0, 00:16:57.843 "data_size": 65536 00:16:57.843 }, 00:16:57.843 { 00:16:57.843 "name": "BaseBdev4", 00:16:57.843 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:16:57.843 "is_configured": true, 00:16:57.843 "data_offset": 0, 00:16:57.843 "data_size": 65536 00:16:57.843 } 00:16:57.843 ] 00:16:57.843 }' 00:16:57.843 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.843 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.409 12:47:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.409 [2024-11-06 12:47:46.874859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.409 BaseBdev1 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.409 12:47:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.409 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.410 [ 00:16:58.410 { 00:16:58.410 "name": "BaseBdev1", 00:16:58.410 "aliases": [ 00:16:58.410 "a742482f-afe2-46da-b6da-8bd2b5c43b63" 00:16:58.410 ], 00:16:58.410 "product_name": "Malloc disk", 00:16:58.410 "block_size": 512, 00:16:58.410 "num_blocks": 65536, 00:16:58.410 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:16:58.410 "assigned_rate_limits": { 00:16:58.410 "rw_ios_per_sec": 0, 00:16:58.410 "rw_mbytes_per_sec": 0, 00:16:58.410 "r_mbytes_per_sec": 0, 00:16:58.410 "w_mbytes_per_sec": 0 00:16:58.410 }, 00:16:58.410 "claimed": true, 00:16:58.410 "claim_type": "exclusive_write", 00:16:58.410 "zoned": false, 00:16:58.410 "supported_io_types": { 00:16:58.410 "read": true, 00:16:58.410 "write": true, 00:16:58.410 "unmap": true, 00:16:58.410 "flush": true, 00:16:58.410 "reset": true, 00:16:58.410 "nvme_admin": false, 00:16:58.410 "nvme_io": false, 00:16:58.410 "nvme_io_md": false, 00:16:58.410 "write_zeroes": true, 00:16:58.410 "zcopy": true, 00:16:58.410 "get_zone_info": false, 00:16:58.410 "zone_management": false, 00:16:58.410 "zone_append": false, 00:16:58.410 "compare": false, 00:16:58.410 "compare_and_write": false, 00:16:58.410 "abort": true, 00:16:58.410 "seek_hole": false, 00:16:58.410 "seek_data": false, 00:16:58.410 "copy": true, 00:16:58.410 "nvme_iov_md": false 00:16:58.410 }, 00:16:58.410 "memory_domains": [ 00:16:58.410 { 00:16:58.410 "dma_device_id": "system", 00:16:58.410 "dma_device_type": 1 
00:16:58.410 }, 00:16:58.410 { 00:16:58.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.410 "dma_device_type": 2 00:16:58.410 } 00:16:58.410 ], 00:16:58.410 "driver_specific": {} 00:16:58.410 } 00:16:58.410 ] 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.410 
12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.410 "name": "Existed_Raid", 00:16:58.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.410 "strip_size_kb": 64, 00:16:58.410 "state": "configuring", 00:16:58.410 "raid_level": "raid5f", 00:16:58.410 "superblock": false, 00:16:58.410 "num_base_bdevs": 4, 00:16:58.410 "num_base_bdevs_discovered": 3, 00:16:58.410 "num_base_bdevs_operational": 4, 00:16:58.410 "base_bdevs_list": [ 00:16:58.410 { 00:16:58.410 "name": "BaseBdev1", 00:16:58.410 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:16:58.410 "is_configured": true, 00:16:58.410 "data_offset": 0, 00:16:58.410 "data_size": 65536 00:16:58.410 }, 00:16:58.410 { 00:16:58.410 "name": null, 00:16:58.410 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:16:58.410 "is_configured": false, 00:16:58.410 "data_offset": 0, 00:16:58.410 "data_size": 65536 00:16:58.410 }, 00:16:58.410 { 00:16:58.410 "name": "BaseBdev3", 00:16:58.410 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:16:58.410 "is_configured": true, 00:16:58.410 "data_offset": 0, 00:16:58.410 "data_size": 65536 00:16:58.410 }, 00:16:58.410 { 00:16:58.410 "name": "BaseBdev4", 00:16:58.410 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:16:58.410 "is_configured": true, 00:16:58.410 "data_offset": 0, 00:16:58.410 "data_size": 65536 00:16:58.410 } 00:16:58.410 ] 00:16:58.410 }' 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.410 12:47:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.977 12:47:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.977 [2024-11-06 12:47:47.507206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.977 12:47:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.977 "name": "Existed_Raid", 00:16:58.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.977 "strip_size_kb": 64, 00:16:58.977 "state": "configuring", 00:16:58.977 "raid_level": "raid5f", 00:16:58.977 "superblock": false, 00:16:58.977 "num_base_bdevs": 4, 00:16:58.977 "num_base_bdevs_discovered": 2, 00:16:58.977 "num_base_bdevs_operational": 4, 00:16:58.977 "base_bdevs_list": [ 00:16:58.977 { 00:16:58.977 "name": "BaseBdev1", 00:16:58.977 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:16:58.977 "is_configured": true, 00:16:58.977 "data_offset": 0, 00:16:58.977 "data_size": 65536 00:16:58.977 }, 00:16:58.977 { 00:16:58.977 "name": null, 00:16:58.977 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:16:58.977 "is_configured": false, 00:16:58.977 "data_offset": 0, 00:16:58.977 "data_size": 65536 00:16:58.977 }, 00:16:58.977 { 00:16:58.977 "name": null, 00:16:58.977 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:16:58.977 "is_configured": false, 00:16:58.977 
"data_offset": 0, 00:16:58.977 "data_size": 65536 00:16:58.977 }, 00:16:58.977 { 00:16:58.977 "name": "BaseBdev4", 00:16:58.977 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:16:58.977 "is_configured": true, 00:16:58.977 "data_offset": 0, 00:16:58.977 "data_size": 65536 00:16:58.977 } 00:16:58.977 ] 00:16:58.977 }' 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.977 12:47:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.544 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.544 [2024-11-06 12:47:48.067280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.545 12:47:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.545 "name": "Existed_Raid", 00:16:59.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.545 "strip_size_kb": 64, 00:16:59.545 "state": "configuring", 00:16:59.545 "raid_level": "raid5f", 00:16:59.545 "superblock": false, 00:16:59.545 "num_base_bdevs": 4, 00:16:59.545 
"num_base_bdevs_discovered": 3, 00:16:59.545 "num_base_bdevs_operational": 4, 00:16:59.545 "base_bdevs_list": [ 00:16:59.545 { 00:16:59.545 "name": "BaseBdev1", 00:16:59.545 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:16:59.545 "is_configured": true, 00:16:59.545 "data_offset": 0, 00:16:59.545 "data_size": 65536 00:16:59.545 }, 00:16:59.545 { 00:16:59.545 "name": null, 00:16:59.545 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:16:59.545 "is_configured": false, 00:16:59.545 "data_offset": 0, 00:16:59.545 "data_size": 65536 00:16:59.545 }, 00:16:59.545 { 00:16:59.545 "name": "BaseBdev3", 00:16:59.545 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:16:59.545 "is_configured": true, 00:16:59.545 "data_offset": 0, 00:16:59.545 "data_size": 65536 00:16:59.545 }, 00:16:59.545 { 00:16:59.545 "name": "BaseBdev4", 00:16:59.545 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:16:59.545 "is_configured": true, 00:16:59.545 "data_offset": 0, 00:16:59.545 "data_size": 65536 00:16:59.545 } 00:16:59.545 ] 00:16:59.545 }' 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.545 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.112 [2024-11-06 12:47:48.643599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.112 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.370 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.370 "name": "Existed_Raid", 00:17:00.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.370 "strip_size_kb": 64, 00:17:00.370 "state": "configuring", 00:17:00.370 "raid_level": "raid5f", 00:17:00.370 "superblock": false, 00:17:00.370 "num_base_bdevs": 4, 00:17:00.370 "num_base_bdevs_discovered": 2, 00:17:00.370 "num_base_bdevs_operational": 4, 00:17:00.370 "base_bdevs_list": [ 00:17:00.370 { 00:17:00.370 "name": null, 00:17:00.370 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:17:00.370 "is_configured": false, 00:17:00.370 "data_offset": 0, 00:17:00.370 "data_size": 65536 00:17:00.370 }, 00:17:00.370 { 00:17:00.370 "name": null, 00:17:00.370 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:17:00.370 "is_configured": false, 00:17:00.370 "data_offset": 0, 00:17:00.370 "data_size": 65536 00:17:00.370 }, 00:17:00.370 { 00:17:00.370 "name": "BaseBdev3", 00:17:00.370 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:17:00.370 "is_configured": true, 00:17:00.370 "data_offset": 0, 00:17:00.370 "data_size": 65536 00:17:00.370 }, 00:17:00.370 { 00:17:00.370 "name": "BaseBdev4", 00:17:00.370 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:17:00.370 "is_configured": true, 00:17:00.370 "data_offset": 0, 00:17:00.370 "data_size": 65536 00:17:00.370 } 00:17:00.370 ] 00:17:00.370 }' 00:17:00.370 12:47:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.370 12:47:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.629 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:00.629 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.629 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.629 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.889 [2024-11-06 12:47:49.330503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.889 "name": "Existed_Raid", 00:17:00.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.889 "strip_size_kb": 64, 00:17:00.889 "state": "configuring", 00:17:00.889 "raid_level": "raid5f", 00:17:00.889 "superblock": false, 00:17:00.889 "num_base_bdevs": 4, 00:17:00.889 "num_base_bdevs_discovered": 3, 00:17:00.889 "num_base_bdevs_operational": 4, 00:17:00.889 "base_bdevs_list": [ 00:17:00.889 { 00:17:00.889 "name": null, 00:17:00.889 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:17:00.889 "is_configured": false, 00:17:00.889 "data_offset": 0, 00:17:00.889 "data_size": 65536 00:17:00.889 }, 00:17:00.889 { 00:17:00.889 "name": "BaseBdev2", 00:17:00.889 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:17:00.889 "is_configured": true, 00:17:00.889 "data_offset": 0, 00:17:00.889 "data_size": 65536 00:17:00.889 }, 00:17:00.889 { 00:17:00.889 "name": "BaseBdev3", 00:17:00.889 "uuid": 
"849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:17:00.889 "is_configured": true, 00:17:00.889 "data_offset": 0, 00:17:00.889 "data_size": 65536 00:17:00.889 }, 00:17:00.889 { 00:17:00.889 "name": "BaseBdev4", 00:17:00.889 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:17:00.889 "is_configured": true, 00:17:00.889 "data_offset": 0, 00:17:00.889 "data_size": 65536 00:17:00.889 } 00:17:00.889 ] 00:17:00.889 }' 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.889 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 
512 -b NewBaseBdev -u a742482f-afe2-46da-b6da-8bd2b5c43b63 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.458 12:47:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.458 [2024-11-06 12:47:50.010861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:01.458 [2024-11-06 12:47:50.010977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.458 [2024-11-06 12:47:50.010995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:01.458 [2024-11-06 12:47:50.011388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:01.458 [2024-11-06 12:47:50.018410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.458 [2024-11-06 12:47:50.018442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:01.458 [2024-11-06 12:47:50.018804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.458 NewBaseBdev 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.458 [ 00:17:01.458 { 00:17:01.458 "name": "NewBaseBdev", 00:17:01.458 "aliases": [ 00:17:01.458 "a742482f-afe2-46da-b6da-8bd2b5c43b63" 00:17:01.458 ], 00:17:01.458 "product_name": "Malloc disk", 00:17:01.458 "block_size": 512, 00:17:01.458 "num_blocks": 65536, 00:17:01.458 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:17:01.458 "assigned_rate_limits": { 00:17:01.458 "rw_ios_per_sec": 0, 00:17:01.458 "rw_mbytes_per_sec": 0, 00:17:01.458 "r_mbytes_per_sec": 0, 00:17:01.458 "w_mbytes_per_sec": 0 00:17:01.458 }, 00:17:01.458 "claimed": true, 00:17:01.458 "claim_type": "exclusive_write", 00:17:01.458 "zoned": false, 00:17:01.458 "supported_io_types": { 00:17:01.458 "read": true, 00:17:01.458 "write": true, 00:17:01.458 "unmap": true, 00:17:01.458 "flush": true, 00:17:01.458 "reset": true, 00:17:01.458 "nvme_admin": false, 00:17:01.458 "nvme_io": false, 00:17:01.458 "nvme_io_md": false, 00:17:01.458 "write_zeroes": true, 00:17:01.458 "zcopy": true, 00:17:01.458 "get_zone_info": false, 00:17:01.458 "zone_management": false, 00:17:01.458 "zone_append": false, 00:17:01.458 "compare": false, 00:17:01.458 "compare_and_write": false, 00:17:01.458 "abort": true, 
00:17:01.458 "seek_hole": false, 00:17:01.458 "seek_data": false, 00:17:01.458 "copy": true, 00:17:01.458 "nvme_iov_md": false 00:17:01.458 }, 00:17:01.458 "memory_domains": [ 00:17:01.458 { 00:17:01.458 "dma_device_id": "system", 00:17:01.458 "dma_device_type": 1 00:17:01.458 }, 00:17:01.458 { 00:17:01.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.458 "dma_device_type": 2 00:17:01.458 } 00:17:01.458 ], 00:17:01.458 "driver_specific": {} 00:17:01.458 } 00:17:01.458 ] 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.458 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.459 "name": "Existed_Raid", 00:17:01.459 "uuid": "cefd0f80-404d-452e-a6ae-0a9060175bd7", 00:17:01.459 "strip_size_kb": 64, 00:17:01.459 "state": "online", 00:17:01.459 "raid_level": "raid5f", 00:17:01.459 "superblock": false, 00:17:01.459 "num_base_bdevs": 4, 00:17:01.459 "num_base_bdevs_discovered": 4, 00:17:01.459 "num_base_bdevs_operational": 4, 00:17:01.459 "base_bdevs_list": [ 00:17:01.459 { 00:17:01.459 "name": "NewBaseBdev", 00:17:01.459 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:17:01.459 "is_configured": true, 00:17:01.459 "data_offset": 0, 00:17:01.459 "data_size": 65536 00:17:01.459 }, 00:17:01.459 { 00:17:01.459 "name": "BaseBdev2", 00:17:01.459 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:17:01.459 "is_configured": true, 00:17:01.459 "data_offset": 0, 00:17:01.459 "data_size": 65536 00:17:01.459 }, 00:17:01.459 { 00:17:01.459 "name": "BaseBdev3", 00:17:01.459 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:17:01.459 "is_configured": true, 00:17:01.459 "data_offset": 0, 00:17:01.459 "data_size": 65536 00:17:01.459 }, 00:17:01.459 { 00:17:01.459 "name": "BaseBdev4", 00:17:01.459 "uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:17:01.459 "is_configured": true, 00:17:01.459 "data_offset": 0, 00:17:01.459 "data_size": 65536 00:17:01.459 } 00:17:01.459 ] 00:17:01.459 }' 00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:17:01.459 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.074 [2024-11-06 12:47:50.611674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.074 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.074 "name": "Existed_Raid", 00:17:02.074 "aliases": [ 00:17:02.074 "cefd0f80-404d-452e-a6ae-0a9060175bd7" 00:17:02.074 ], 00:17:02.074 "product_name": "Raid Volume", 00:17:02.074 "block_size": 512, 00:17:02.074 "num_blocks": 196608, 00:17:02.074 "uuid": "cefd0f80-404d-452e-a6ae-0a9060175bd7", 00:17:02.074 "assigned_rate_limits": { 00:17:02.074 "rw_ios_per_sec": 0, 00:17:02.074 "rw_mbytes_per_sec": 0, 
00:17:02.074 "r_mbytes_per_sec": 0, 00:17:02.074 "w_mbytes_per_sec": 0 00:17:02.074 }, 00:17:02.074 "claimed": false, 00:17:02.074 "zoned": false, 00:17:02.074 "supported_io_types": { 00:17:02.074 "read": true, 00:17:02.074 "write": true, 00:17:02.074 "unmap": false, 00:17:02.075 "flush": false, 00:17:02.075 "reset": true, 00:17:02.075 "nvme_admin": false, 00:17:02.075 "nvme_io": false, 00:17:02.075 "nvme_io_md": false, 00:17:02.075 "write_zeroes": true, 00:17:02.075 "zcopy": false, 00:17:02.075 "get_zone_info": false, 00:17:02.075 "zone_management": false, 00:17:02.075 "zone_append": false, 00:17:02.075 "compare": false, 00:17:02.075 "compare_and_write": false, 00:17:02.075 "abort": false, 00:17:02.075 "seek_hole": false, 00:17:02.075 "seek_data": false, 00:17:02.075 "copy": false, 00:17:02.075 "nvme_iov_md": false 00:17:02.075 }, 00:17:02.075 "driver_specific": { 00:17:02.075 "raid": { 00:17:02.075 "uuid": "cefd0f80-404d-452e-a6ae-0a9060175bd7", 00:17:02.075 "strip_size_kb": 64, 00:17:02.075 "state": "online", 00:17:02.075 "raid_level": "raid5f", 00:17:02.075 "superblock": false, 00:17:02.075 "num_base_bdevs": 4, 00:17:02.075 "num_base_bdevs_discovered": 4, 00:17:02.075 "num_base_bdevs_operational": 4, 00:17:02.075 "base_bdevs_list": [ 00:17:02.075 { 00:17:02.075 "name": "NewBaseBdev", 00:17:02.075 "uuid": "a742482f-afe2-46da-b6da-8bd2b5c43b63", 00:17:02.075 "is_configured": true, 00:17:02.075 "data_offset": 0, 00:17:02.075 "data_size": 65536 00:17:02.075 }, 00:17:02.075 { 00:17:02.075 "name": "BaseBdev2", 00:17:02.075 "uuid": "17de904e-4f07-4fc8-990c-a6ec4bb19790", 00:17:02.075 "is_configured": true, 00:17:02.075 "data_offset": 0, 00:17:02.075 "data_size": 65536 00:17:02.075 }, 00:17:02.075 { 00:17:02.075 "name": "BaseBdev3", 00:17:02.075 "uuid": "849a12d7-c5e4-47f8-b4fb-24e48e347fe4", 00:17:02.075 "is_configured": true, 00:17:02.075 "data_offset": 0, 00:17:02.075 "data_size": 65536 00:17:02.075 }, 00:17:02.075 { 00:17:02.075 "name": "BaseBdev4", 00:17:02.075 
"uuid": "4c1f09d1-438c-40af-b499-049a5c670067", 00:17:02.075 "is_configured": true, 00:17:02.075 "data_offset": 0, 00:17:02.075 "data_size": 65536 00:17:02.075 } 00:17:02.075 ] 00:17:02.075 } 00:17:02.075 } 00:17:02.075 }' 00:17:02.075 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.075 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:02.075 BaseBdev2 00:17:02.075 BaseBdev3 00:17:02.075 BaseBdev4' 00:17:02.075 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.356 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:02.357 
12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.357 12:47:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.357 [2024-11-06 12:47:51.003442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.357 [2024-11-06 12:47:51.003515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.357 [2024-11-06 12:47:51.003659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.357 [2024-11-06 12:47:51.004093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.357 [2024-11-06 12:47:51.004120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:02.357 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.357 12:47:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83242 00:17:02.357 12:47:51 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@952 -- # '[' -z 83242 ']' 00:17:02.357 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83242 00:17:02.357 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83242 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:02.615 killing process with pid 83242 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83242' 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83242 00:17:02.615 12:47:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83242 00:17:02.615 [2024-11-06 12:47:51.041498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.874 [2024-11-06 12:47:51.502132] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.249 00:17:04.249 real 0m13.484s 00:17:04.249 user 0m22.017s 00:17:04.249 sys 0m1.930s 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.249 ************************************ 00:17:04.249 END TEST raid5f_state_function_test 00:17:04.249 ************************************ 00:17:04.249 12:47:52 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test 
raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:04.249 12:47:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:04.249 12:47:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:04.249 12:47:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.249 ************************************ 00:17:04.249 START TEST raid5f_state_function_test_sb 00:17:04.249 ************************************ 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.249 12:47:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.249 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:04.250 12:47:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83926 00:17:04.250 Process raid pid: 83926 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83926' 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83926 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83926 ']' 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:04.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:04.250 12:47:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.250 [2024-11-06 12:47:52.788479] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:17:04.250 [2024-11-06 12:47:52.788651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.508 [2024-11-06 12:47:52.964610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.508 [2024-11-06 12:47:53.117990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.766 [2024-11-06 12:47:53.350417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.766 [2024-11-06 12:47:53.350487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.364 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 [2024-11-06 12:47:53.822889] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.365 [2024-11-06 12:47:53.823002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.365 [2024-11-06 12:47:53.823020] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.365 [2024-11-06 12:47:53.823036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.365 [2024-11-06 12:47:53.823047] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:05.365 [2024-11-06 12:47:53.823062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.365 [2024-11-06 12:47:53.823072] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.365 [2024-11-06 12:47:53.823086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.365 "name": "Existed_Raid", 00:17:05.365 "uuid": "37463eb1-d2fb-48a4-a5a2-ce6d6b989a5e", 00:17:05.365 "strip_size_kb": 64, 00:17:05.365 "state": "configuring", 00:17:05.365 "raid_level": "raid5f", 00:17:05.365 "superblock": true, 00:17:05.365 "num_base_bdevs": 4, 00:17:05.365 "num_base_bdevs_discovered": 0, 00:17:05.365 "num_base_bdevs_operational": 4, 00:17:05.365 "base_bdevs_list": [ 00:17:05.365 { 00:17:05.365 "name": "BaseBdev1", 00:17:05.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.365 "is_configured": false, 00:17:05.365 "data_offset": 0, 00:17:05.365 "data_size": 0 00:17:05.365 }, 00:17:05.365 { 00:17:05.365 "name": "BaseBdev2", 00:17:05.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.365 "is_configured": false, 00:17:05.365 "data_offset": 0, 00:17:05.365 "data_size": 0 00:17:05.365 }, 00:17:05.365 { 00:17:05.365 "name": "BaseBdev3", 00:17:05.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.365 "is_configured": false, 00:17:05.365 "data_offset": 0, 00:17:05.365 "data_size": 0 00:17:05.365 }, 00:17:05.365 { 00:17:05.365 "name": "BaseBdev4", 00:17:05.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.365 "is_configured": false, 00:17:05.365 "data_offset": 0, 00:17:05.365 "data_size": 0 00:17:05.365 } 00:17:05.365 ] 00:17:05.365 }' 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.365 12:47:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.932 [2024-11-06 12:47:54.359030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.932 [2024-11-06 12:47:54.359115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.932 [2024-11-06 12:47:54.366939] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.932 [2024-11-06 12:47:54.367031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.932 [2024-11-06 12:47:54.367047] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.932 [2024-11-06 12:47:54.367064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.932 [2024-11-06 12:47:54.367074] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.932 [2024-11-06 12:47:54.367090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.932 [2024-11-06 12:47:54.367100] 
bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.932 [2024-11-06 12:47:54.367115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.932 [2024-11-06 12:47:54.435985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.932 BaseBdev1 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.932 [ 00:17:05.932 { 00:17:05.932 "name": "BaseBdev1", 00:17:05.932 "aliases": [ 00:17:05.932 "a06875fb-5836-4c3c-a5d4-1c4d42f27d58" 00:17:05.932 ], 00:17:05.932 "product_name": "Malloc disk", 00:17:05.932 "block_size": 512, 00:17:05.932 "num_blocks": 65536, 00:17:05.932 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:05.932 "assigned_rate_limits": { 00:17:05.932 "rw_ios_per_sec": 0, 00:17:05.932 "rw_mbytes_per_sec": 0, 00:17:05.932 "r_mbytes_per_sec": 0, 00:17:05.932 "w_mbytes_per_sec": 0 00:17:05.932 }, 00:17:05.932 "claimed": true, 00:17:05.932 "claim_type": "exclusive_write", 00:17:05.932 "zoned": false, 00:17:05.932 "supported_io_types": { 00:17:05.932 "read": true, 00:17:05.932 "write": true, 00:17:05.932 "unmap": true, 00:17:05.932 "flush": true, 00:17:05.932 "reset": true, 00:17:05.932 "nvme_admin": false, 00:17:05.932 "nvme_io": false, 00:17:05.932 "nvme_io_md": false, 00:17:05.932 "write_zeroes": true, 00:17:05.932 "zcopy": true, 00:17:05.932 "get_zone_info": false, 00:17:05.932 "zone_management": false, 00:17:05.932 "zone_append": false, 00:17:05.932 "compare": false, 00:17:05.932 "compare_and_write": false, 00:17:05.932 "abort": true, 00:17:05.932 "seek_hole": false, 00:17:05.932 "seek_data": false, 00:17:05.932 "copy": true, 00:17:05.932 "nvme_iov_md": false 00:17:05.932 }, 00:17:05.932 "memory_domains": [ 00:17:05.932 { 00:17:05.932 "dma_device_id": "system", 00:17:05.932 "dma_device_type": 1 00:17:05.932 }, 00:17:05.932 { 00:17:05.932 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:05.932 "dma_device_type": 2 00:17:05.932 } 00:17:05.932 ], 00:17:05.932 "driver_specific": {} 00:17:05.932 } 00:17:05.932 ] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.932 12:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.932 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.932 "name": "Existed_Raid", 00:17:05.932 "uuid": "662b2708-3f61-4e70-8b89-0f4c2d795f13", 00:17:05.932 "strip_size_kb": 64, 00:17:05.932 "state": "configuring", 00:17:05.932 "raid_level": "raid5f", 00:17:05.932 "superblock": true, 00:17:05.932 "num_base_bdevs": 4, 00:17:05.932 "num_base_bdevs_discovered": 1, 00:17:05.932 "num_base_bdevs_operational": 4, 00:17:05.932 "base_bdevs_list": [ 00:17:05.932 { 00:17:05.932 "name": "BaseBdev1", 00:17:05.932 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:05.932 "is_configured": true, 00:17:05.932 "data_offset": 2048, 00:17:05.932 "data_size": 63488 00:17:05.932 }, 00:17:05.932 { 00:17:05.932 "name": "BaseBdev2", 00:17:05.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.932 "is_configured": false, 00:17:05.932 "data_offset": 0, 00:17:05.932 "data_size": 0 00:17:05.932 }, 00:17:05.932 { 00:17:05.932 "name": "BaseBdev3", 00:17:05.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.932 "is_configured": false, 00:17:05.932 "data_offset": 0, 00:17:05.932 "data_size": 0 00:17:05.932 }, 00:17:05.932 { 00:17:05.932 "name": "BaseBdev4", 00:17:05.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.932 "is_configured": false, 00:17:05.932 "data_offset": 0, 00:17:05.933 "data_size": 0 00:17:05.933 } 00:17:05.933 ] 00:17:05.933 }' 00:17:05.933 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.933 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.499 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:06.499 12:47:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.499 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.500 [2024-11-06 12:47:54.972170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.500 [2024-11-06 12:47:54.972305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.500 [2024-11-06 12:47:54.980182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.500 [2024-11-06 12:47:54.983037] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.500 [2024-11-06 12:47:54.983119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.500 [2024-11-06 12:47:54.983139] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.500 [2024-11-06 12:47:54.983157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.500 [2024-11-06 12:47:54.983167] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.500 [2024-11-06 12:47:54.983181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.500 12:47:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.500 12:47:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.500 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.500 "name": "Existed_Raid", 00:17:06.500 "uuid": "9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:06.500 "strip_size_kb": 64, 00:17:06.500 "state": "configuring", 00:17:06.500 "raid_level": "raid5f", 00:17:06.500 "superblock": true, 00:17:06.500 "num_base_bdevs": 4, 00:17:06.500 "num_base_bdevs_discovered": 1, 00:17:06.500 "num_base_bdevs_operational": 4, 00:17:06.500 "base_bdevs_list": [ 00:17:06.500 { 00:17:06.500 "name": "BaseBdev1", 00:17:06.500 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:06.500 "is_configured": true, 00:17:06.500 "data_offset": 2048, 00:17:06.500 "data_size": 63488 00:17:06.500 }, 00:17:06.500 { 00:17:06.500 "name": "BaseBdev2", 00:17:06.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.500 "is_configured": false, 00:17:06.500 "data_offset": 0, 00:17:06.500 "data_size": 0 00:17:06.500 }, 00:17:06.500 { 00:17:06.500 "name": "BaseBdev3", 00:17:06.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.500 "is_configured": false, 00:17:06.500 "data_offset": 0, 00:17:06.500 "data_size": 0 00:17:06.500 }, 00:17:06.500 { 00:17:06.500 "name": "BaseBdev4", 00:17:06.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.500 "is_configured": false, 00:17:06.500 "data_offset": 0, 00:17:06.500 "data_size": 0 00:17:06.500 } 00:17:06.500 ] 00:17:06.500 }' 00:17:06.500 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.500 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.067 [2024-11-06 12:47:55.553830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.067 BaseBdev2 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.067 [ 00:17:07.067 { 00:17:07.067 "name": "BaseBdev2", 00:17:07.067 "aliases": [ 00:17:07.067 
"7dc7287d-672f-4b1c-960d-8cb37046496f" 00:17:07.067 ], 00:17:07.067 "product_name": "Malloc disk", 00:17:07.067 "block_size": 512, 00:17:07.067 "num_blocks": 65536, 00:17:07.067 "uuid": "7dc7287d-672f-4b1c-960d-8cb37046496f", 00:17:07.067 "assigned_rate_limits": { 00:17:07.067 "rw_ios_per_sec": 0, 00:17:07.067 "rw_mbytes_per_sec": 0, 00:17:07.067 "r_mbytes_per_sec": 0, 00:17:07.067 "w_mbytes_per_sec": 0 00:17:07.067 }, 00:17:07.067 "claimed": true, 00:17:07.067 "claim_type": "exclusive_write", 00:17:07.067 "zoned": false, 00:17:07.067 "supported_io_types": { 00:17:07.067 "read": true, 00:17:07.067 "write": true, 00:17:07.067 "unmap": true, 00:17:07.067 "flush": true, 00:17:07.067 "reset": true, 00:17:07.067 "nvme_admin": false, 00:17:07.067 "nvme_io": false, 00:17:07.067 "nvme_io_md": false, 00:17:07.067 "write_zeroes": true, 00:17:07.067 "zcopy": true, 00:17:07.067 "get_zone_info": false, 00:17:07.067 "zone_management": false, 00:17:07.067 "zone_append": false, 00:17:07.067 "compare": false, 00:17:07.067 "compare_and_write": false, 00:17:07.067 "abort": true, 00:17:07.067 "seek_hole": false, 00:17:07.067 "seek_data": false, 00:17:07.067 "copy": true, 00:17:07.067 "nvme_iov_md": false 00:17:07.067 }, 00:17:07.067 "memory_domains": [ 00:17:07.067 { 00:17:07.067 "dma_device_id": "system", 00:17:07.067 "dma_device_type": 1 00:17:07.067 }, 00:17:07.067 { 00:17:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.067 "dma_device_type": 2 00:17:07.067 } 00:17:07.067 ], 00:17:07.067 "driver_specific": {} 00:17:07.067 } 00:17:07.067 ] 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.067 "name": "Existed_Raid", 00:17:07.067 "uuid": 
"9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:07.067 "strip_size_kb": 64, 00:17:07.067 "state": "configuring", 00:17:07.067 "raid_level": "raid5f", 00:17:07.067 "superblock": true, 00:17:07.067 "num_base_bdevs": 4, 00:17:07.067 "num_base_bdevs_discovered": 2, 00:17:07.067 "num_base_bdevs_operational": 4, 00:17:07.067 "base_bdevs_list": [ 00:17:07.067 { 00:17:07.067 "name": "BaseBdev1", 00:17:07.067 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:07.067 "is_configured": true, 00:17:07.067 "data_offset": 2048, 00:17:07.067 "data_size": 63488 00:17:07.067 }, 00:17:07.067 { 00:17:07.067 "name": "BaseBdev2", 00:17:07.067 "uuid": "7dc7287d-672f-4b1c-960d-8cb37046496f", 00:17:07.067 "is_configured": true, 00:17:07.067 "data_offset": 2048, 00:17:07.067 "data_size": 63488 00:17:07.067 }, 00:17:07.067 { 00:17:07.067 "name": "BaseBdev3", 00:17:07.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.067 "is_configured": false, 00:17:07.067 "data_offset": 0, 00:17:07.067 "data_size": 0 00:17:07.067 }, 00:17:07.067 { 00:17:07.067 "name": "BaseBdev4", 00:17:07.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.067 "is_configured": false, 00:17:07.067 "data_offset": 0, 00:17:07.067 "data_size": 0 00:17:07.067 } 00:17:07.067 ] 00:17:07.067 }' 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.067 12:47:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 [2024-11-06 12:47:56.150974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.633 BaseBdev3 
00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 [ 00:17:07.633 { 00:17:07.633 "name": "BaseBdev3", 00:17:07.633 "aliases": [ 00:17:07.633 "9774e755-1229-4118-b0a1-0e251c31c723" 00:17:07.633 ], 00:17:07.633 "product_name": "Malloc disk", 00:17:07.633 "block_size": 512, 00:17:07.633 "num_blocks": 65536, 00:17:07.633 "uuid": "9774e755-1229-4118-b0a1-0e251c31c723", 00:17:07.633 
"assigned_rate_limits": { 00:17:07.633 "rw_ios_per_sec": 0, 00:17:07.633 "rw_mbytes_per_sec": 0, 00:17:07.633 "r_mbytes_per_sec": 0, 00:17:07.633 "w_mbytes_per_sec": 0 00:17:07.633 }, 00:17:07.633 "claimed": true, 00:17:07.633 "claim_type": "exclusive_write", 00:17:07.633 "zoned": false, 00:17:07.633 "supported_io_types": { 00:17:07.633 "read": true, 00:17:07.633 "write": true, 00:17:07.633 "unmap": true, 00:17:07.633 "flush": true, 00:17:07.633 "reset": true, 00:17:07.633 "nvme_admin": false, 00:17:07.633 "nvme_io": false, 00:17:07.633 "nvme_io_md": false, 00:17:07.633 "write_zeroes": true, 00:17:07.633 "zcopy": true, 00:17:07.633 "get_zone_info": false, 00:17:07.633 "zone_management": false, 00:17:07.633 "zone_append": false, 00:17:07.633 "compare": false, 00:17:07.633 "compare_and_write": false, 00:17:07.633 "abort": true, 00:17:07.633 "seek_hole": false, 00:17:07.633 "seek_data": false, 00:17:07.633 "copy": true, 00:17:07.633 "nvme_iov_md": false 00:17:07.633 }, 00:17:07.633 "memory_domains": [ 00:17:07.633 { 00:17:07.633 "dma_device_id": "system", 00:17:07.633 "dma_device_type": 1 00:17:07.633 }, 00:17:07.633 { 00:17:07.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.633 "dma_device_type": 2 00:17:07.633 } 00:17:07.633 ], 00:17:07.633 "driver_specific": {} 00:17:07.633 } 00:17:07.633 ] 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.633 "name": "Existed_Raid", 00:17:07.633 "uuid": "9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:07.633 "strip_size_kb": 64, 00:17:07.633 "state": "configuring", 00:17:07.633 "raid_level": "raid5f", 00:17:07.633 "superblock": true, 00:17:07.633 "num_base_bdevs": 4, 00:17:07.633 "num_base_bdevs_discovered": 3, 
00:17:07.633 "num_base_bdevs_operational": 4, 00:17:07.633 "base_bdevs_list": [ 00:17:07.633 { 00:17:07.633 "name": "BaseBdev1", 00:17:07.633 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:07.633 "is_configured": true, 00:17:07.633 "data_offset": 2048, 00:17:07.633 "data_size": 63488 00:17:07.633 }, 00:17:07.633 { 00:17:07.633 "name": "BaseBdev2", 00:17:07.633 "uuid": "7dc7287d-672f-4b1c-960d-8cb37046496f", 00:17:07.633 "is_configured": true, 00:17:07.633 "data_offset": 2048, 00:17:07.633 "data_size": 63488 00:17:07.633 }, 00:17:07.633 { 00:17:07.633 "name": "BaseBdev3", 00:17:07.633 "uuid": "9774e755-1229-4118-b0a1-0e251c31c723", 00:17:07.633 "is_configured": true, 00:17:07.633 "data_offset": 2048, 00:17:07.633 "data_size": 63488 00:17:07.633 }, 00:17:07.633 { 00:17:07.633 "name": "BaseBdev4", 00:17:07.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.633 "is_configured": false, 00:17:07.633 "data_offset": 0, 00:17:07.633 "data_size": 0 00:17:07.633 } 00:17:07.633 ] 00:17:07.633 }' 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.633 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.200 [2024-11-06 12:47:56.781870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:08.200 [2024-11-06 12:47:56.782298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.200 [2024-11-06 12:47:56.782337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:08.200 BaseBdev4 
00:17:08.200 [2024-11-06 12:47:56.782708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.200 [2024-11-06 12:47:56.789978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.200 [2024-11-06 12:47:56.790012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:08.200 [2024-11-06 12:47:56.790346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:08.200 12:47:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.200 [ 00:17:08.200 { 00:17:08.200 "name": "BaseBdev4", 00:17:08.200 "aliases": [ 00:17:08.200 "cd242dee-74d1-4675-bb5e-3f2eb47b8bca" 00:17:08.200 ], 00:17:08.200 "product_name": "Malloc disk", 00:17:08.200 "block_size": 512, 00:17:08.200 "num_blocks": 65536, 00:17:08.200 "uuid": "cd242dee-74d1-4675-bb5e-3f2eb47b8bca", 00:17:08.200 "assigned_rate_limits": { 00:17:08.200 "rw_ios_per_sec": 0, 00:17:08.200 "rw_mbytes_per_sec": 0, 00:17:08.200 "r_mbytes_per_sec": 0, 00:17:08.200 "w_mbytes_per_sec": 0 00:17:08.200 }, 00:17:08.200 "claimed": true, 00:17:08.200 "claim_type": "exclusive_write", 00:17:08.200 "zoned": false, 00:17:08.200 "supported_io_types": { 00:17:08.200 "read": true, 00:17:08.200 "write": true, 00:17:08.200 "unmap": true, 00:17:08.200 "flush": true, 00:17:08.200 "reset": true, 00:17:08.200 "nvme_admin": false, 00:17:08.200 "nvme_io": false, 00:17:08.200 "nvme_io_md": false, 00:17:08.200 "write_zeroes": true, 00:17:08.200 "zcopy": true, 00:17:08.200 "get_zone_info": false, 00:17:08.200 "zone_management": false, 00:17:08.200 "zone_append": false, 00:17:08.200 "compare": false, 00:17:08.200 "compare_and_write": false, 00:17:08.200 "abort": true, 00:17:08.200 "seek_hole": false, 00:17:08.200 "seek_data": false, 00:17:08.200 "copy": true, 00:17:08.200 "nvme_iov_md": false 00:17:08.200 }, 00:17:08.200 "memory_domains": [ 00:17:08.200 { 00:17:08.200 "dma_device_id": "system", 00:17:08.200 "dma_device_type": 1 00:17:08.200 }, 00:17:08.200 { 00:17:08.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.200 "dma_device_type": 2 00:17:08.200 } 00:17:08.200 ], 00:17:08.200 "driver_specific": {} 00:17:08.200 } 00:17:08.200 ] 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.200 12:47:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:08.200 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:08.201 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.460 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.460 "name": "Existed_Raid", 00:17:08.460 "uuid": "9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:08.460 "strip_size_kb": 64, 00:17:08.460 "state": "online", 00:17:08.460 "raid_level": "raid5f", 00:17:08.460 "superblock": true, 00:17:08.460 "num_base_bdevs": 4, 00:17:08.460 "num_base_bdevs_discovered": 4, 00:17:08.460 "num_base_bdevs_operational": 4, 00:17:08.460 "base_bdevs_list": [ 00:17:08.460 { 00:17:08.460 "name": "BaseBdev1", 00:17:08.460 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:08.460 "is_configured": true, 00:17:08.460 "data_offset": 2048, 00:17:08.460 "data_size": 63488 00:17:08.460 }, 00:17:08.460 { 00:17:08.460 "name": "BaseBdev2", 00:17:08.460 "uuid": "7dc7287d-672f-4b1c-960d-8cb37046496f", 00:17:08.460 "is_configured": true, 00:17:08.460 "data_offset": 2048, 00:17:08.460 "data_size": 63488 00:17:08.460 }, 00:17:08.460 { 00:17:08.460 "name": "BaseBdev3", 00:17:08.460 "uuid": "9774e755-1229-4118-b0a1-0e251c31c723", 00:17:08.460 "is_configured": true, 00:17:08.460 "data_offset": 2048, 00:17:08.460 "data_size": 63488 00:17:08.460 }, 00:17:08.460 { 00:17:08.460 "name": "BaseBdev4", 00:17:08.460 "uuid": "cd242dee-74d1-4675-bb5e-3f2eb47b8bca", 00:17:08.460 "is_configured": true, 00:17:08.460 "data_offset": 2048, 00:17:08.460 "data_size": 63488 00:17:08.460 } 00:17:08.460 ] 00:17:08.460 }' 00:17:08.460 12:47:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.460 12:47:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.718 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.718 [2024-11-06 12:47:57.342969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.719 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.978 "name": "Existed_Raid", 00:17:08.978 "aliases": [ 00:17:08.978 "9030ef2f-eb00-4519-ba54-c575a0ae2a18" 00:17:08.978 ], 00:17:08.978 "product_name": "Raid Volume", 00:17:08.978 "block_size": 512, 00:17:08.978 "num_blocks": 190464, 00:17:08.978 "uuid": "9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:08.978 "assigned_rate_limits": { 00:17:08.978 "rw_ios_per_sec": 0, 00:17:08.978 "rw_mbytes_per_sec": 0, 00:17:08.978 "r_mbytes_per_sec": 0, 00:17:08.978 "w_mbytes_per_sec": 0 00:17:08.978 }, 00:17:08.978 "claimed": false, 00:17:08.978 "zoned": false, 00:17:08.978 "supported_io_types": { 00:17:08.978 "read": true, 00:17:08.978 "write": true, 00:17:08.978 "unmap": false, 00:17:08.978 "flush": false, 
00:17:08.978 "reset": true, 00:17:08.978 "nvme_admin": false, 00:17:08.978 "nvme_io": false, 00:17:08.978 "nvme_io_md": false, 00:17:08.978 "write_zeroes": true, 00:17:08.978 "zcopy": false, 00:17:08.978 "get_zone_info": false, 00:17:08.978 "zone_management": false, 00:17:08.978 "zone_append": false, 00:17:08.978 "compare": false, 00:17:08.978 "compare_and_write": false, 00:17:08.978 "abort": false, 00:17:08.978 "seek_hole": false, 00:17:08.978 "seek_data": false, 00:17:08.978 "copy": false, 00:17:08.978 "nvme_iov_md": false 00:17:08.978 }, 00:17:08.978 "driver_specific": { 00:17:08.978 "raid": { 00:17:08.978 "uuid": "9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:08.978 "strip_size_kb": 64, 00:17:08.978 "state": "online", 00:17:08.978 "raid_level": "raid5f", 00:17:08.978 "superblock": true, 00:17:08.978 "num_base_bdevs": 4, 00:17:08.978 "num_base_bdevs_discovered": 4, 00:17:08.978 "num_base_bdevs_operational": 4, 00:17:08.978 "base_bdevs_list": [ 00:17:08.978 { 00:17:08.978 "name": "BaseBdev1", 00:17:08.978 "uuid": "a06875fb-5836-4c3c-a5d4-1c4d42f27d58", 00:17:08.978 "is_configured": true, 00:17:08.978 "data_offset": 2048, 00:17:08.978 "data_size": 63488 00:17:08.978 }, 00:17:08.978 { 00:17:08.978 "name": "BaseBdev2", 00:17:08.978 "uuid": "7dc7287d-672f-4b1c-960d-8cb37046496f", 00:17:08.978 "is_configured": true, 00:17:08.978 "data_offset": 2048, 00:17:08.978 "data_size": 63488 00:17:08.978 }, 00:17:08.978 { 00:17:08.978 "name": "BaseBdev3", 00:17:08.978 "uuid": "9774e755-1229-4118-b0a1-0e251c31c723", 00:17:08.978 "is_configured": true, 00:17:08.978 "data_offset": 2048, 00:17:08.978 "data_size": 63488 00:17:08.978 }, 00:17:08.978 { 00:17:08.978 "name": "BaseBdev4", 00:17:08.978 "uuid": "cd242dee-74d1-4675-bb5e-3f2eb47b8bca", 00:17:08.978 "is_configured": true, 00:17:08.978 "data_offset": 2048, 00:17:08.978 "data_size": 63488 00:17:08.978 } 00:17:08.978 ] 00:17:08.978 } 00:17:08.978 } 00:17:08.978 }' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:08.978 BaseBdev2 00:17:08.978 BaseBdev3 00:17:08.978 BaseBdev4' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.978 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.237 [2024-11-06 12:47:57.726872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:17:09.237 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.238 "name": "Existed_Raid", 00:17:09.238 "uuid": "9030ef2f-eb00-4519-ba54-c575a0ae2a18", 00:17:09.238 "strip_size_kb": 64, 00:17:09.238 "state": "online", 00:17:09.238 "raid_level": "raid5f", 00:17:09.238 "superblock": true, 00:17:09.238 "num_base_bdevs": 4, 00:17:09.238 "num_base_bdevs_discovered": 3, 00:17:09.238 "num_base_bdevs_operational": 3, 00:17:09.238 "base_bdevs_list": [ 00:17:09.238 { 00:17:09.238 "name": null, 00:17:09.238 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:09.238 "is_configured": false, 00:17:09.238 "data_offset": 0, 00:17:09.238 "data_size": 63488 00:17:09.238 }, 00:17:09.238 { 00:17:09.238 "name": "BaseBdev2", 00:17:09.238 "uuid": "7dc7287d-672f-4b1c-960d-8cb37046496f", 00:17:09.238 "is_configured": true, 00:17:09.238 "data_offset": 2048, 00:17:09.238 "data_size": 63488 00:17:09.238 }, 00:17:09.238 { 00:17:09.238 "name": "BaseBdev3", 00:17:09.238 "uuid": "9774e755-1229-4118-b0a1-0e251c31c723", 00:17:09.238 "is_configured": true, 00:17:09.238 "data_offset": 2048, 00:17:09.238 "data_size": 63488 00:17:09.238 }, 00:17:09.238 { 00:17:09.238 "name": "BaseBdev4", 00:17:09.238 "uuid": "cd242dee-74d1-4675-bb5e-3f2eb47b8bca", 00:17:09.238 "is_configured": true, 00:17:09.238 "data_offset": 2048, 00:17:09.238 "data_size": 63488 00:17:09.238 } 00:17:09.238 ] 00:17:09.238 }' 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.238 12:47:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.805 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 [2024-11-06 12:47:58.402044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.805 [2024-11-06 12:47:58.402361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.063 [2024-11-06 12:47:58.497515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.063 
12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 [2024-11-06 12:47:58.561521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 [2024-11-06 12:47:58.718909] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:10.322 [2024-11-06 12:47:58.719003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.322 BaseBdev2 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 [ 00:17:10.322 { 00:17:10.322 "name": "BaseBdev2", 00:17:10.322 "aliases": [ 00:17:10.322 "de53ac72-5ad6-4db5-9126-5371f4395a60" 00:17:10.322 ], 00:17:10.322 "product_name": "Malloc disk", 00:17:10.322 "block_size": 512, 00:17:10.322 "num_blocks": 65536, 00:17:10.322 "uuid": 
"de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:10.322 "assigned_rate_limits": { 00:17:10.322 "rw_ios_per_sec": 0, 00:17:10.322 "rw_mbytes_per_sec": 0, 00:17:10.322 "r_mbytes_per_sec": 0, 00:17:10.322 "w_mbytes_per_sec": 0 00:17:10.322 }, 00:17:10.322 "claimed": false, 00:17:10.322 "zoned": false, 00:17:10.322 "supported_io_types": { 00:17:10.322 "read": true, 00:17:10.322 "write": true, 00:17:10.322 "unmap": true, 00:17:10.322 "flush": true, 00:17:10.322 "reset": true, 00:17:10.322 "nvme_admin": false, 00:17:10.322 "nvme_io": false, 00:17:10.322 "nvme_io_md": false, 00:17:10.322 "write_zeroes": true, 00:17:10.322 "zcopy": true, 00:17:10.322 "get_zone_info": false, 00:17:10.322 "zone_management": false, 00:17:10.322 "zone_append": false, 00:17:10.322 "compare": false, 00:17:10.322 "compare_and_write": false, 00:17:10.322 "abort": true, 00:17:10.322 "seek_hole": false, 00:17:10.322 "seek_data": false, 00:17:10.322 "copy": true, 00:17:10.322 "nvme_iov_md": false 00:17:10.322 }, 00:17:10.322 "memory_domains": [ 00:17:10.322 { 00:17:10.322 "dma_device_id": "system", 00:17:10.322 "dma_device_type": 1 00:17:10.322 }, 00:17:10.322 { 00:17:10.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.322 "dma_device_type": 2 00:17:10.322 } 00:17:10.322 ], 00:17:10.322 "driver_specific": {} 00:17:10.322 } 00:17:10.322 ] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 BaseBdev3 00:17:10.581 12:47:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.581 12:47:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 [ 00:17:10.581 { 00:17:10.581 "name": "BaseBdev3", 00:17:10.581 "aliases": [ 00:17:10.581 "9da083db-7909-440e-811a-4d09d3b17b31" 00:17:10.581 ], 00:17:10.581 
"product_name": "Malloc disk", 00:17:10.581 "block_size": 512, 00:17:10.581 "num_blocks": 65536, 00:17:10.581 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:10.581 "assigned_rate_limits": { 00:17:10.581 "rw_ios_per_sec": 0, 00:17:10.581 "rw_mbytes_per_sec": 0, 00:17:10.581 "r_mbytes_per_sec": 0, 00:17:10.581 "w_mbytes_per_sec": 0 00:17:10.581 }, 00:17:10.581 "claimed": false, 00:17:10.581 "zoned": false, 00:17:10.581 "supported_io_types": { 00:17:10.581 "read": true, 00:17:10.581 "write": true, 00:17:10.581 "unmap": true, 00:17:10.581 "flush": true, 00:17:10.581 "reset": true, 00:17:10.581 "nvme_admin": false, 00:17:10.581 "nvme_io": false, 00:17:10.581 "nvme_io_md": false, 00:17:10.581 "write_zeroes": true, 00:17:10.581 "zcopy": true, 00:17:10.581 "get_zone_info": false, 00:17:10.581 "zone_management": false, 00:17:10.581 "zone_append": false, 00:17:10.581 "compare": false, 00:17:10.581 "compare_and_write": false, 00:17:10.581 "abort": true, 00:17:10.581 "seek_hole": false, 00:17:10.581 "seek_data": false, 00:17:10.581 "copy": true, 00:17:10.581 "nvme_iov_md": false 00:17:10.581 }, 00:17:10.581 "memory_domains": [ 00:17:10.581 { 00:17:10.581 "dma_device_id": "system", 00:17:10.581 "dma_device_type": 1 00:17:10.581 }, 00:17:10.581 { 00:17:10.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.581 "dma_device_type": 2 00:17:10.581 } 00:17:10.581 ], 00:17:10.581 "driver_specific": {} 00:17:10.581 } 00:17:10.581 ] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 BaseBdev4 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.581 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 [ 00:17:10.581 { 00:17:10.581 "name": "BaseBdev4", 00:17:10.581 
"aliases": [ 00:17:10.581 "5583fc2e-ceff-4e9e-bd69-1b8a998ded81" 00:17:10.581 ], 00:17:10.581 "product_name": "Malloc disk", 00:17:10.581 "block_size": 512, 00:17:10.581 "num_blocks": 65536, 00:17:10.581 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:10.581 "assigned_rate_limits": { 00:17:10.582 "rw_ios_per_sec": 0, 00:17:10.582 "rw_mbytes_per_sec": 0, 00:17:10.582 "r_mbytes_per_sec": 0, 00:17:10.582 "w_mbytes_per_sec": 0 00:17:10.582 }, 00:17:10.582 "claimed": false, 00:17:10.582 "zoned": false, 00:17:10.582 "supported_io_types": { 00:17:10.582 "read": true, 00:17:10.582 "write": true, 00:17:10.582 "unmap": true, 00:17:10.582 "flush": true, 00:17:10.582 "reset": true, 00:17:10.582 "nvme_admin": false, 00:17:10.582 "nvme_io": false, 00:17:10.582 "nvme_io_md": false, 00:17:10.582 "write_zeroes": true, 00:17:10.582 "zcopy": true, 00:17:10.582 "get_zone_info": false, 00:17:10.582 "zone_management": false, 00:17:10.582 "zone_append": false, 00:17:10.582 "compare": false, 00:17:10.582 "compare_and_write": false, 00:17:10.582 "abort": true, 00:17:10.582 "seek_hole": false, 00:17:10.582 "seek_data": false, 00:17:10.582 "copy": true, 00:17:10.582 "nvme_iov_md": false 00:17:10.582 }, 00:17:10.582 "memory_domains": [ 00:17:10.582 { 00:17:10.582 "dma_device_id": "system", 00:17:10.582 "dma_device_type": 1 00:17:10.582 }, 00:17:10.582 { 00:17:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.582 "dma_device_type": 2 00:17:10.582 } 00:17:10.582 ], 00:17:10.582 "driver_specific": {} 00:17:10.582 } 00:17:10.582 ] 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.582 
12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 [2024-11-06 12:47:59.128529] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.582 [2024-11-06 12:47:59.128611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.582 [2024-11-06 12:47:59.128662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.582 [2024-11-06 12:47:59.131715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.582 [2024-11-06 12:47:59.131814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.582 "name": "Existed_Raid", 00:17:10.582 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:10.582 "strip_size_kb": 64, 00:17:10.582 "state": "configuring", 00:17:10.582 "raid_level": "raid5f", 00:17:10.582 "superblock": true, 00:17:10.582 "num_base_bdevs": 4, 00:17:10.582 "num_base_bdevs_discovered": 3, 00:17:10.582 "num_base_bdevs_operational": 4, 00:17:10.582 "base_bdevs_list": [ 00:17:10.582 { 00:17:10.582 "name": "BaseBdev1", 00:17:10.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.582 "is_configured": false, 00:17:10.582 "data_offset": 0, 00:17:10.582 "data_size": 0 00:17:10.582 }, 00:17:10.582 { 00:17:10.582 "name": "BaseBdev2", 00:17:10.582 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:10.582 "is_configured": true, 00:17:10.582 "data_offset": 2048, 00:17:10.582 "data_size": 63488 00:17:10.582 }, 00:17:10.582 { 00:17:10.582 "name": "BaseBdev3", 
00:17:10.582 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:10.582 "is_configured": true, 00:17:10.582 "data_offset": 2048, 00:17:10.582 "data_size": 63488 00:17:10.582 }, 00:17:10.582 { 00:17:10.582 "name": "BaseBdev4", 00:17:10.582 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:10.582 "is_configured": true, 00:17:10.582 "data_offset": 2048, 00:17:10.582 "data_size": 63488 00:17:10.582 } 00:17:10.582 ] 00:17:10.582 }' 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.582 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.152 [2024-11-06 12:47:59.684711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.152 
12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.152 "name": "Existed_Raid", 00:17:11.152 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:11.152 "strip_size_kb": 64, 00:17:11.152 "state": "configuring", 00:17:11.152 "raid_level": "raid5f", 00:17:11.152 "superblock": true, 00:17:11.152 "num_base_bdevs": 4, 00:17:11.152 "num_base_bdevs_discovered": 2, 00:17:11.152 "num_base_bdevs_operational": 4, 00:17:11.152 "base_bdevs_list": [ 00:17:11.152 { 00:17:11.152 "name": "BaseBdev1", 00:17:11.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.152 "is_configured": false, 00:17:11.152 "data_offset": 0, 00:17:11.152 "data_size": 0 00:17:11.152 }, 00:17:11.152 { 00:17:11.152 "name": null, 00:17:11.152 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:11.152 "is_configured": false, 00:17:11.152 "data_offset": 0, 00:17:11.152 "data_size": 63488 00:17:11.152 }, 00:17:11.152 { 
00:17:11.152 "name": "BaseBdev3", 00:17:11.152 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:11.152 "is_configured": true, 00:17:11.152 "data_offset": 2048, 00:17:11.152 "data_size": 63488 00:17:11.152 }, 00:17:11.152 { 00:17:11.152 "name": "BaseBdev4", 00:17:11.152 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:11.152 "is_configured": true, 00:17:11.152 "data_offset": 2048, 00:17:11.152 "data_size": 63488 00:17:11.152 } 00:17:11.152 ] 00:17:11.152 }' 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.152 12:47:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 [2024-11-06 12:48:00.311680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.754 BaseBdev1 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 [ 00:17:11.754 { 00:17:11.754 "name": "BaseBdev1", 00:17:11.754 "aliases": [ 00:17:11.754 "0860eff8-afe5-4c52-b797-78f63050650d" 00:17:11.754 ], 00:17:11.754 "product_name": "Malloc disk", 00:17:11.754 "block_size": 512, 00:17:11.754 "num_blocks": 65536, 00:17:11.754 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:11.754 "assigned_rate_limits": { 00:17:11.754 "rw_ios_per_sec": 0, 00:17:11.754 "rw_mbytes_per_sec": 0, 00:17:11.754 
"r_mbytes_per_sec": 0, 00:17:11.754 "w_mbytes_per_sec": 0 00:17:11.754 }, 00:17:11.754 "claimed": true, 00:17:11.754 "claim_type": "exclusive_write", 00:17:11.754 "zoned": false, 00:17:11.754 "supported_io_types": { 00:17:11.754 "read": true, 00:17:11.754 "write": true, 00:17:11.754 "unmap": true, 00:17:11.754 "flush": true, 00:17:11.754 "reset": true, 00:17:11.754 "nvme_admin": false, 00:17:11.754 "nvme_io": false, 00:17:11.754 "nvme_io_md": false, 00:17:11.754 "write_zeroes": true, 00:17:11.754 "zcopy": true, 00:17:11.754 "get_zone_info": false, 00:17:11.754 "zone_management": false, 00:17:11.754 "zone_append": false, 00:17:11.754 "compare": false, 00:17:11.754 "compare_and_write": false, 00:17:11.754 "abort": true, 00:17:11.754 "seek_hole": false, 00:17:11.754 "seek_data": false, 00:17:11.754 "copy": true, 00:17:11.754 "nvme_iov_md": false 00:17:11.754 }, 00:17:11.754 "memory_domains": [ 00:17:11.754 { 00:17:11.754 "dma_device_id": "system", 00:17:11.754 "dma_device_type": 1 00:17:11.754 }, 00:17:11.754 { 00:17:11.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.754 "dma_device_type": 2 00:17:11.754 } 00:17:11.754 ], 00:17:11.754 "driver_specific": {} 00:17:11.754 } 00:17:11.754 ] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.754 12:48:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.754 "name": "Existed_Raid", 00:17:11.754 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:11.754 "strip_size_kb": 64, 00:17:11.754 "state": "configuring", 00:17:11.754 "raid_level": "raid5f", 00:17:11.754 "superblock": true, 00:17:11.754 "num_base_bdevs": 4, 00:17:11.754 "num_base_bdevs_discovered": 3, 00:17:11.754 "num_base_bdevs_operational": 4, 00:17:11.754 "base_bdevs_list": [ 00:17:11.754 { 00:17:11.754 "name": "BaseBdev1", 00:17:11.754 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:11.754 "is_configured": true, 00:17:11.754 "data_offset": 2048, 00:17:11.754 "data_size": 63488 00:17:11.754 
}, 00:17:11.754 { 00:17:11.754 "name": null, 00:17:11.754 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:11.754 "is_configured": false, 00:17:11.754 "data_offset": 0, 00:17:11.754 "data_size": 63488 00:17:11.754 }, 00:17:11.754 { 00:17:11.754 "name": "BaseBdev3", 00:17:11.754 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:11.754 "is_configured": true, 00:17:11.754 "data_offset": 2048, 00:17:11.754 "data_size": 63488 00:17:11.754 }, 00:17:11.754 { 00:17:11.754 "name": "BaseBdev4", 00:17:11.754 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:11.754 "is_configured": true, 00:17:11.754 "data_offset": 2048, 00:17:11.754 "data_size": 63488 00:17:11.754 } 00:17:11.754 ] 00:17:11.754 }' 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.754 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.321 
[2024-11-06 12:48:00.927991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.321 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:12.579 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.579 "name": "Existed_Raid", 00:17:12.579 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:12.579 "strip_size_kb": 64, 00:17:12.579 "state": "configuring", 00:17:12.579 "raid_level": "raid5f", 00:17:12.579 "superblock": true, 00:17:12.579 "num_base_bdevs": 4, 00:17:12.579 "num_base_bdevs_discovered": 2, 00:17:12.579 "num_base_bdevs_operational": 4, 00:17:12.579 "base_bdevs_list": [ 00:17:12.579 { 00:17:12.579 "name": "BaseBdev1", 00:17:12.579 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:12.579 "is_configured": true, 00:17:12.579 "data_offset": 2048, 00:17:12.579 "data_size": 63488 00:17:12.579 }, 00:17:12.579 { 00:17:12.579 "name": null, 00:17:12.579 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:12.579 "is_configured": false, 00:17:12.579 "data_offset": 0, 00:17:12.579 "data_size": 63488 00:17:12.579 }, 00:17:12.579 { 00:17:12.579 "name": null, 00:17:12.579 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:12.579 "is_configured": false, 00:17:12.579 "data_offset": 0, 00:17:12.579 "data_size": 63488 00:17:12.579 }, 00:17:12.579 { 00:17:12.579 "name": "BaseBdev4", 00:17:12.579 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:12.579 "is_configured": true, 00:17:12.579 "data_offset": 2048, 00:17:12.579 "data_size": 63488 00:17:12.579 } 00:17:12.579 ] 00:17:12.579 }' 00:17:12.579 12:48:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.579 12:48:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.836 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.836 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.836 12:48:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.836 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.836 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.094 [2024-11-06 12:48:01.512129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.094 12:48:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.094 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.095 "name": "Existed_Raid", 00:17:13.095 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:13.095 "strip_size_kb": 64, 00:17:13.095 "state": "configuring", 00:17:13.095 "raid_level": "raid5f", 00:17:13.095 "superblock": true, 00:17:13.095 "num_base_bdevs": 4, 00:17:13.095 "num_base_bdevs_discovered": 3, 00:17:13.095 "num_base_bdevs_operational": 4, 00:17:13.095 "base_bdevs_list": [ 00:17:13.095 { 00:17:13.095 "name": "BaseBdev1", 00:17:13.095 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:13.095 "is_configured": true, 00:17:13.095 "data_offset": 2048, 00:17:13.095 "data_size": 63488 00:17:13.095 }, 00:17:13.095 { 00:17:13.095 "name": null, 00:17:13.095 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:13.095 "is_configured": false, 00:17:13.095 "data_offset": 0, 00:17:13.095 "data_size": 63488 00:17:13.095 }, 00:17:13.095 { 00:17:13.095 "name": "BaseBdev3", 00:17:13.095 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:13.095 "is_configured": true, 00:17:13.095 "data_offset": 2048, 00:17:13.095 "data_size": 63488 00:17:13.095 }, 00:17:13.095 { 
00:17:13.095 "name": "BaseBdev4", 00:17:13.095 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:13.095 "is_configured": true, 00:17:13.095 "data_offset": 2048, 00:17:13.095 "data_size": 63488 00:17:13.095 } 00:17:13.095 ] 00:17:13.095 }' 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.095 12:48:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 [2024-11-06 12:48:02.124400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.662 "name": "Existed_Raid", 00:17:13.662 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:13.662 "strip_size_kb": 64, 00:17:13.662 "state": "configuring", 00:17:13.662 "raid_level": "raid5f", 00:17:13.662 "superblock": true, 00:17:13.662 "num_base_bdevs": 4, 00:17:13.662 "num_base_bdevs_discovered": 2, 00:17:13.662 
"num_base_bdevs_operational": 4, 00:17:13.662 "base_bdevs_list": [ 00:17:13.662 { 00:17:13.662 "name": null, 00:17:13.662 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:13.662 "is_configured": false, 00:17:13.662 "data_offset": 0, 00:17:13.662 "data_size": 63488 00:17:13.662 }, 00:17:13.662 { 00:17:13.662 "name": null, 00:17:13.662 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:13.662 "is_configured": false, 00:17:13.662 "data_offset": 0, 00:17:13.662 "data_size": 63488 00:17:13.662 }, 00:17:13.662 { 00:17:13.662 "name": "BaseBdev3", 00:17:13.662 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:13.662 "is_configured": true, 00:17:13.662 "data_offset": 2048, 00:17:13.662 "data_size": 63488 00:17:13.662 }, 00:17:13.662 { 00:17:13.662 "name": "BaseBdev4", 00:17:13.662 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:13.662 "is_configured": true, 00:17:13.662 "data_offset": 2048, 00:17:13.662 "data_size": 63488 00:17:13.662 } 00:17:13.662 ] 00:17:13.662 }' 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.662 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.229 [2024-11-06 12:48:02.801387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.229 "name": "Existed_Raid", 00:17:14.229 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:14.229 "strip_size_kb": 64, 00:17:14.229 "state": "configuring", 00:17:14.229 "raid_level": "raid5f", 00:17:14.229 "superblock": true, 00:17:14.229 "num_base_bdevs": 4, 00:17:14.229 "num_base_bdevs_discovered": 3, 00:17:14.229 "num_base_bdevs_operational": 4, 00:17:14.229 "base_bdevs_list": [ 00:17:14.229 { 00:17:14.229 "name": null, 00:17:14.229 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:14.229 "is_configured": false, 00:17:14.229 "data_offset": 0, 00:17:14.229 "data_size": 63488 00:17:14.229 }, 00:17:14.229 { 00:17:14.229 "name": "BaseBdev2", 00:17:14.229 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:14.229 "is_configured": true, 00:17:14.229 "data_offset": 2048, 00:17:14.229 "data_size": 63488 00:17:14.229 }, 00:17:14.229 { 00:17:14.229 "name": "BaseBdev3", 00:17:14.229 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:14.229 "is_configured": true, 00:17:14.229 "data_offset": 2048, 00:17:14.229 "data_size": 63488 00:17:14.229 }, 00:17:14.229 { 00:17:14.229 "name": "BaseBdev4", 00:17:14.229 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:14.229 "is_configured": true, 00:17:14.229 "data_offset": 2048, 00:17:14.229 "data_size": 63488 00:17:14.229 } 00:17:14.229 ] 00:17:14.229 }' 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.229 12:48:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0860eff8-afe5-4c52-b797-78f63050650d 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.795 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.053 [2024-11-06 12:48:03.472641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:15.053 [2024-11-06 12:48:03.472987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:15.053 [2024-11-06 
12:48:03.473006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:15.053 NewBaseBdev 00:17:15.053 [2024-11-06 12:48:03.473387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.053 [2024-11-06 12:48:03.480127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:15.053 [2024-11-06 12:48:03.480190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:15.053 [2024-11-06 12:48:03.480568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.053 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.053 [ 00:17:15.053 { 00:17:15.053 "name": "NewBaseBdev", 00:17:15.053 "aliases": [ 00:17:15.053 "0860eff8-afe5-4c52-b797-78f63050650d" 00:17:15.053 ], 00:17:15.054 "product_name": "Malloc disk", 00:17:15.054 "block_size": 512, 00:17:15.054 "num_blocks": 65536, 00:17:15.054 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:15.054 "assigned_rate_limits": { 00:17:15.054 "rw_ios_per_sec": 0, 00:17:15.054 "rw_mbytes_per_sec": 0, 00:17:15.054 "r_mbytes_per_sec": 0, 00:17:15.054 "w_mbytes_per_sec": 0 00:17:15.054 }, 00:17:15.054 "claimed": true, 00:17:15.054 "claim_type": "exclusive_write", 00:17:15.054 "zoned": false, 00:17:15.054 "supported_io_types": { 00:17:15.054 "read": true, 00:17:15.054 "write": true, 00:17:15.054 "unmap": true, 00:17:15.054 "flush": true, 00:17:15.054 "reset": true, 00:17:15.054 "nvme_admin": false, 00:17:15.054 "nvme_io": false, 00:17:15.054 "nvme_io_md": false, 00:17:15.054 "write_zeroes": true, 00:17:15.054 "zcopy": true, 00:17:15.054 "get_zone_info": false, 00:17:15.054 "zone_management": false, 00:17:15.054 "zone_append": false, 00:17:15.054 "compare": false, 00:17:15.054 "compare_and_write": false, 00:17:15.054 "abort": true, 00:17:15.054 "seek_hole": false, 00:17:15.054 "seek_data": false, 00:17:15.054 "copy": true, 00:17:15.054 "nvme_iov_md": false 00:17:15.054 }, 00:17:15.054 "memory_domains": [ 00:17:15.054 { 00:17:15.054 "dma_device_id": "system", 00:17:15.054 "dma_device_type": 1 00:17:15.054 }, 00:17:15.054 { 00:17:15.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.054 "dma_device_type": 2 00:17:15.054 } 00:17:15.054 ], 00:17:15.054 "driver_specific": {} 00:17:15.054 } 00:17:15.054 ] 00:17:15.054 12:48:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.054 "name": "Existed_Raid", 00:17:15.054 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:15.054 "strip_size_kb": 64, 00:17:15.054 "state": "online", 00:17:15.054 "raid_level": "raid5f", 00:17:15.054 "superblock": true, 00:17:15.054 "num_base_bdevs": 4, 00:17:15.054 "num_base_bdevs_discovered": 4, 00:17:15.054 "num_base_bdevs_operational": 4, 00:17:15.054 "base_bdevs_list": [ 00:17:15.054 { 00:17:15.054 "name": "NewBaseBdev", 00:17:15.054 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:15.054 "is_configured": true, 00:17:15.054 "data_offset": 2048, 00:17:15.054 "data_size": 63488 00:17:15.054 }, 00:17:15.054 { 00:17:15.054 "name": "BaseBdev2", 00:17:15.054 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:15.054 "is_configured": true, 00:17:15.054 "data_offset": 2048, 00:17:15.054 "data_size": 63488 00:17:15.054 }, 00:17:15.054 { 00:17:15.054 "name": "BaseBdev3", 00:17:15.054 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:15.054 "is_configured": true, 00:17:15.054 "data_offset": 2048, 00:17:15.054 "data_size": 63488 00:17:15.054 }, 00:17:15.054 { 00:17:15.054 "name": "BaseBdev4", 00:17:15.054 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:15.054 "is_configured": true, 00:17:15.054 "data_offset": 2048, 00:17:15.054 "data_size": 63488 00:17:15.054 } 00:17:15.054 ] 00:17:15.054 }' 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.054 12:48:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.621 [2024-11-06 12:48:04.061866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.621 "name": "Existed_Raid", 00:17:15.621 "aliases": [ 00:17:15.621 "e810f74e-7e56-4c90-8cde-748d4b41220c" 00:17:15.621 ], 00:17:15.621 "product_name": "Raid Volume", 00:17:15.621 "block_size": 512, 00:17:15.621 "num_blocks": 190464, 00:17:15.621 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:15.621 "assigned_rate_limits": { 00:17:15.621 "rw_ios_per_sec": 0, 00:17:15.621 "rw_mbytes_per_sec": 0, 00:17:15.621 "r_mbytes_per_sec": 0, 00:17:15.621 "w_mbytes_per_sec": 0 00:17:15.621 }, 00:17:15.621 "claimed": false, 00:17:15.621 "zoned": false, 00:17:15.621 "supported_io_types": { 00:17:15.621 "read": true, 00:17:15.621 "write": true, 00:17:15.621 "unmap": false, 00:17:15.621 "flush": false, 00:17:15.621 "reset": true, 00:17:15.621 "nvme_admin": false, 00:17:15.621 "nvme_io": false, 
00:17:15.621 "nvme_io_md": false, 00:17:15.621 "write_zeroes": true, 00:17:15.621 "zcopy": false, 00:17:15.621 "get_zone_info": false, 00:17:15.621 "zone_management": false, 00:17:15.621 "zone_append": false, 00:17:15.621 "compare": false, 00:17:15.621 "compare_and_write": false, 00:17:15.621 "abort": false, 00:17:15.621 "seek_hole": false, 00:17:15.621 "seek_data": false, 00:17:15.621 "copy": false, 00:17:15.621 "nvme_iov_md": false 00:17:15.621 }, 00:17:15.621 "driver_specific": { 00:17:15.621 "raid": { 00:17:15.621 "uuid": "e810f74e-7e56-4c90-8cde-748d4b41220c", 00:17:15.621 "strip_size_kb": 64, 00:17:15.621 "state": "online", 00:17:15.621 "raid_level": "raid5f", 00:17:15.621 "superblock": true, 00:17:15.621 "num_base_bdevs": 4, 00:17:15.621 "num_base_bdevs_discovered": 4, 00:17:15.621 "num_base_bdevs_operational": 4, 00:17:15.621 "base_bdevs_list": [ 00:17:15.621 { 00:17:15.621 "name": "NewBaseBdev", 00:17:15.621 "uuid": "0860eff8-afe5-4c52-b797-78f63050650d", 00:17:15.621 "is_configured": true, 00:17:15.621 "data_offset": 2048, 00:17:15.621 "data_size": 63488 00:17:15.621 }, 00:17:15.621 { 00:17:15.621 "name": "BaseBdev2", 00:17:15.621 "uuid": "de53ac72-5ad6-4db5-9126-5371f4395a60", 00:17:15.621 "is_configured": true, 00:17:15.621 "data_offset": 2048, 00:17:15.621 "data_size": 63488 00:17:15.621 }, 00:17:15.621 { 00:17:15.621 "name": "BaseBdev3", 00:17:15.621 "uuid": "9da083db-7909-440e-811a-4d09d3b17b31", 00:17:15.621 "is_configured": true, 00:17:15.621 "data_offset": 2048, 00:17:15.621 "data_size": 63488 00:17:15.621 }, 00:17:15.621 { 00:17:15.621 "name": "BaseBdev4", 00:17:15.621 "uuid": "5583fc2e-ceff-4e9e-bd69-1b8a998ded81", 00:17:15.621 "is_configured": true, 00:17:15.621 "data_offset": 2048, 00:17:15.621 "data_size": 63488 00:17:15.621 } 00:17:15.621 ] 00:17:15.621 } 00:17:15.621 } 00:17:15.621 }' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:15.621 BaseBdev2 00:17:15.621 BaseBdev3 00:17:15.621 BaseBdev4' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.621 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.880 12:48:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.880 [2024-11-06 12:48:04.457569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.880 [2024-11-06 12:48:04.457609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.880 [2024-11-06 12:48:04.457721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.880 [2024-11-06 12:48:04.458120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.880 [2024-11-06 12:48:04.458139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83926 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83926 ']' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83926 00:17:15.880 12:48:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83926 00:17:15.880 killing process with pid 83926 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83926' 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83926 00:17:15.880 [2024-11-06 12:48:04.502372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.880 12:48:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83926 00:17:16.447 [2024-11-06 12:48:04.900444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.823 12:48:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.823 00:17:17.823 real 0m13.401s 00:17:17.823 user 0m21.978s 00:17:17.823 sys 0m1.935s 00:17:17.823 ************************************ 00:17:17.823 END TEST raid5f_state_function_test_sb 00:17:17.823 ************************************ 00:17:17.823 12:48:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:17.823 12:48:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.823 12:48:06 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:17.823 12:48:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:17.823 
12:48:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:17.823 12:48:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.823 ************************************ 00:17:17.823 START TEST raid5f_superblock_test 00:17:17.823 ************************************ 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84614 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84614 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84614 ']' 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:17.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:17.823 12:48:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.823 [2024-11-06 12:48:06.257781] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:17:17.823 [2024-11-06 12:48:06.257972] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84614 ] 00:17:17.823 [2024-11-06 12:48:06.452366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.082 [2024-11-06 12:48:06.611677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.340 [2024-11-06 12:48:06.845501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.340 [2024-11-06 12:48:06.845590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.906 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 malloc1 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 [2024-11-06 12:48:07.384848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.907 [2024-11-06 12:48:07.385097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.907 [2024-11-06 12:48:07.385269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:18.907 [2024-11-06 12:48:07.385399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.907 [2024-11-06 12:48:07.388394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.907 [2024-11-06 12:48:07.388570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:18.907 pt1 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 malloc2 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 [2024-11-06 12:48:07.438275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.907 [2024-11-06 12:48:07.438348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.907 [2024-11-06 12:48:07.438383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:18.907 [2024-11-06 12:48:07.438397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.907 [2024-11-06 12:48:07.441229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.907 [2024-11-06 12:48:07.441274] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.907 pt2 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 malloc3 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 [2024-11-06 12:48:07.502215] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.907 [2024-11-06 12:48:07.502478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.907 [2024-11-06 12:48:07.502526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:18.907 [2024-11-06 12:48:07.502544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.907 [2024-11-06 12:48:07.505500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.907 [2024-11-06 12:48:07.505655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.907 pt3 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 malloc4 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.907 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.907 [2024-11-06 12:48:07.560066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:18.907 [2024-11-06 12:48:07.560139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.907 [2024-11-06 12:48:07.560169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:18.907 [2024-11-06 12:48:07.560183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.165 [2024-11-06 12:48:07.562942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.165 [2024-11-06 12:48:07.563106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:19.165 pt4 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.165 [2024-11-06 12:48:07.568137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.165 [2024-11-06 12:48:07.571018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.165 [2024-11-06 12:48:07.571280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.165 [2024-11-06 12:48:07.571573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:19.165 [2024-11-06 12:48:07.572007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:19.165 [2024-11-06 12:48:07.572159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:19.165 [2024-11-06 12:48:07.572684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:19.165 [2024-11-06 12:48:07.579970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:19.165 [2024-11-06 12:48:07.580124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:19.165 [2024-11-06 12:48:07.580527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.165 
12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.165 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.166 "name": "raid_bdev1", 00:17:19.166 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:19.166 "strip_size_kb": 64, 00:17:19.166 "state": "online", 00:17:19.166 "raid_level": "raid5f", 00:17:19.166 "superblock": true, 00:17:19.166 "num_base_bdevs": 4, 00:17:19.166 "num_base_bdevs_discovered": 4, 00:17:19.166 "num_base_bdevs_operational": 4, 00:17:19.166 "base_bdevs_list": [ 00:17:19.166 { 00:17:19.166 "name": "pt1", 00:17:19.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.166 "is_configured": true, 00:17:19.166 "data_offset": 2048, 00:17:19.166 "data_size": 63488 00:17:19.166 }, 00:17:19.166 { 00:17:19.166 "name": "pt2", 00:17:19.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.166 "is_configured": true, 00:17:19.166 "data_offset": 2048, 00:17:19.166 
"data_size": 63488 00:17:19.166 }, 00:17:19.166 { 00:17:19.166 "name": "pt3", 00:17:19.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.166 "is_configured": true, 00:17:19.166 "data_offset": 2048, 00:17:19.166 "data_size": 63488 00:17:19.166 }, 00:17:19.166 { 00:17:19.166 "name": "pt4", 00:17:19.166 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.166 "is_configured": true, 00:17:19.166 "data_offset": 2048, 00:17:19.166 "data_size": 63488 00:17:19.166 } 00:17:19.166 ] 00:17:19.166 }' 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.166 12:48:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.733 [2024-11-06 12:48:08.136474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.733 "name": "raid_bdev1", 00:17:19.733 "aliases": [ 00:17:19.733 "b3d89756-e304-4eb3-b113-5d4ecdb5da07" 00:17:19.733 ], 00:17:19.733 "product_name": "Raid Volume", 00:17:19.733 "block_size": 512, 00:17:19.733 "num_blocks": 190464, 00:17:19.733 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:19.733 "assigned_rate_limits": { 00:17:19.733 "rw_ios_per_sec": 0, 00:17:19.733 "rw_mbytes_per_sec": 0, 00:17:19.733 "r_mbytes_per_sec": 0, 00:17:19.733 "w_mbytes_per_sec": 0 00:17:19.733 }, 00:17:19.733 "claimed": false, 00:17:19.733 "zoned": false, 00:17:19.733 "supported_io_types": { 00:17:19.733 "read": true, 00:17:19.733 "write": true, 00:17:19.733 "unmap": false, 00:17:19.733 "flush": false, 00:17:19.733 "reset": true, 00:17:19.733 "nvme_admin": false, 00:17:19.733 "nvme_io": false, 00:17:19.733 "nvme_io_md": false, 00:17:19.733 "write_zeroes": true, 00:17:19.733 "zcopy": false, 00:17:19.733 "get_zone_info": false, 00:17:19.733 "zone_management": false, 00:17:19.733 "zone_append": false, 00:17:19.733 "compare": false, 00:17:19.733 "compare_and_write": false, 00:17:19.733 "abort": false, 00:17:19.733 "seek_hole": false, 00:17:19.733 "seek_data": false, 00:17:19.733 "copy": false, 00:17:19.733 "nvme_iov_md": false 00:17:19.733 }, 00:17:19.733 "driver_specific": { 00:17:19.733 "raid": { 00:17:19.733 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:19.733 "strip_size_kb": 64, 00:17:19.733 "state": "online", 00:17:19.733 "raid_level": "raid5f", 00:17:19.733 "superblock": true, 00:17:19.733 "num_base_bdevs": 4, 00:17:19.733 "num_base_bdevs_discovered": 4, 00:17:19.733 "num_base_bdevs_operational": 4, 00:17:19.733 "base_bdevs_list": [ 00:17:19.733 { 00:17:19.733 "name": "pt1", 00:17:19.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.733 "is_configured": true, 00:17:19.733 "data_offset": 2048, 
00:17:19.733 "data_size": 63488 00:17:19.733 }, 00:17:19.733 { 00:17:19.733 "name": "pt2", 00:17:19.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.733 "is_configured": true, 00:17:19.733 "data_offset": 2048, 00:17:19.733 "data_size": 63488 00:17:19.733 }, 00:17:19.733 { 00:17:19.733 "name": "pt3", 00:17:19.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.733 "is_configured": true, 00:17:19.733 "data_offset": 2048, 00:17:19.733 "data_size": 63488 00:17:19.733 }, 00:17:19.733 { 00:17:19.733 "name": "pt4", 00:17:19.733 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.733 "is_configured": true, 00:17:19.733 "data_offset": 2048, 00:17:19.733 "data_size": 63488 00:17:19.733 } 00:17:19.733 ] 00:17:19.733 } 00:17:19.733 } 00:17:19.733 }' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.733 pt2 00:17:19.733 pt3 00:17:19.733 pt4' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.733 12:48:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.733 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 [2024-11-06 12:48:08.504584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3d89756-e304-4eb3-b113-5d4ecdb5da07 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
b3d89756-e304-4eb3-b113-5d4ecdb5da07 ']' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 [2024-11-06 12:48:08.552366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.992 [2024-11-06 12:48:08.552402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.992 [2024-11-06 12:48:08.552523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.992 [2024-11-06 12:48:08.552636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.992 [2024-11-06 12:48:08.552660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.992 
12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.993 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.993 12:48:08 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.251 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.252 [2024-11-06 12:48:08.728457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:20.252 [2024-11-06 12:48:08.731045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:20.252 [2024-11-06 12:48:08.731244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:20.252 [2024-11-06 12:48:08.731453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:20.252 [2024-11-06 12:48:08.731644] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:20.252 [2024-11-06 12:48:08.731846] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:20.252 [2024-11-06 12:48:08.732035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:20.252 [2024-11-06 12:48:08.732277] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:20.252 [2024-11-06 12:48:08.732500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.252 [2024-11-06 12:48:08.732701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:20.252 request: 00:17:20.252 { 00:17:20.252 "name": "raid_bdev1", 00:17:20.252 "raid_level": "raid5f", 00:17:20.252 "base_bdevs": [ 00:17:20.252 "malloc1", 00:17:20.252 "malloc2", 00:17:20.252 "malloc3", 00:17:20.252 "malloc4" 00:17:20.252 ], 00:17:20.252 "strip_size_kb": 64, 00:17:20.252 "superblock": false, 00:17:20.252 "method": "bdev_raid_create", 00:17:20.252 "req_id": 1 00:17:20.252 } 00:17:20.252 Got JSON-RPC error response 
00:17:20.252 response: 00:17:20.252 { 00:17:20.252 "code": -17, 00:17:20.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:20.252 } 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.252 [2024-11-06 12:48:08.801038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.252 [2024-11-06 12:48:08.801130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:20.252 [2024-11-06 12:48:08.801158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:20.252 [2024-11-06 12:48:08.801175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.252 [2024-11-06 12:48:08.804081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.252 [2024-11-06 12:48:08.804135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.252 [2024-11-06 12:48:08.804266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.252 [2024-11-06 12:48:08.804347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.252 pt1 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.252 "name": "raid_bdev1", 00:17:20.252 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:20.252 "strip_size_kb": 64, 00:17:20.252 "state": "configuring", 00:17:20.252 "raid_level": "raid5f", 00:17:20.252 "superblock": true, 00:17:20.252 "num_base_bdevs": 4, 00:17:20.252 "num_base_bdevs_discovered": 1, 00:17:20.252 "num_base_bdevs_operational": 4, 00:17:20.252 "base_bdevs_list": [ 00:17:20.252 { 00:17:20.252 "name": "pt1", 00:17:20.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.252 "is_configured": true, 00:17:20.252 "data_offset": 2048, 00:17:20.252 "data_size": 63488 00:17:20.252 }, 00:17:20.252 { 00:17:20.252 "name": null, 00:17:20.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.252 "is_configured": false, 00:17:20.252 "data_offset": 2048, 00:17:20.252 "data_size": 63488 00:17:20.252 }, 00:17:20.252 { 00:17:20.252 "name": null, 00:17:20.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.252 "is_configured": false, 00:17:20.252 "data_offset": 2048, 00:17:20.252 "data_size": 63488 00:17:20.252 }, 00:17:20.252 { 00:17:20.252 "name": null, 00:17:20.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.252 "is_configured": false, 00:17:20.252 "data_offset": 2048, 00:17:20.252 "data_size": 63488 00:17:20.252 } 00:17:20.252 ] 00:17:20.252 }' 
00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.252 12:48:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 [2024-11-06 12:48:09.333210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.819 [2024-11-06 12:48:09.333315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.819 [2024-11-06 12:48:09.333345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:20.819 [2024-11-06 12:48:09.333374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.819 [2024-11-06 12:48:09.333930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.819 [2024-11-06 12:48:09.333974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.819 [2024-11-06 12:48:09.334136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.819 [2024-11-06 12:48:09.334175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.819 pt2 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 [2024-11-06 12:48:09.341185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.819 "name": "raid_bdev1", 00:17:20.819 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:20.819 "strip_size_kb": 64, 00:17:20.819 "state": "configuring", 00:17:20.819 "raid_level": "raid5f", 00:17:20.819 "superblock": true, 00:17:20.819 "num_base_bdevs": 4, 00:17:20.819 "num_base_bdevs_discovered": 1, 00:17:20.819 "num_base_bdevs_operational": 4, 00:17:20.819 "base_bdevs_list": [ 00:17:20.819 { 00:17:20.819 "name": "pt1", 00:17:20.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.819 "is_configured": true, 00:17:20.819 "data_offset": 2048, 00:17:20.819 "data_size": 63488 00:17:20.819 }, 00:17:20.819 { 00:17:20.819 "name": null, 00:17:20.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.819 "is_configured": false, 00:17:20.819 "data_offset": 0, 00:17:20.819 "data_size": 63488 00:17:20.819 }, 00:17:20.819 { 00:17:20.819 "name": null, 00:17:20.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.819 "is_configured": false, 00:17:20.819 "data_offset": 2048, 00:17:20.819 "data_size": 63488 00:17:20.819 }, 00:17:20.819 { 00:17:20.819 "name": null, 00:17:20.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.819 "is_configured": false, 00:17:20.819 "data_offset": 2048, 00:17:20.819 "data_size": 63488 00:17:20.819 } 00:17:20.819 ] 00:17:20.819 }' 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.819 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.401 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:21.401 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.401 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:21.401 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.401 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.401 [2024-11-06 12:48:09.869404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.402 [2024-11-06 12:48:09.869493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.402 [2024-11-06 12:48:09.869525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:21.402 [2024-11-06 12:48:09.869548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.402 [2024-11-06 12:48:09.870267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.402 [2024-11-06 12:48:09.870292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.402 [2024-11-06 12:48:09.870415] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.402 [2024-11-06 12:48:09.870464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.402 pt2 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.402 [2024-11-06 12:48:09.877329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:21.402 [2024-11-06 12:48:09.877393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.402 [2024-11-06 12:48:09.877420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:21.402 [2024-11-06 12:48:09.877433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.402 [2024-11-06 12:48:09.877853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.402 [2024-11-06 12:48:09.877886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:21.402 [2024-11-06 12:48:09.877966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:21.402 [2024-11-06 12:48:09.877994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.402 pt3 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.402 [2024-11-06 12:48:09.885297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:21.402 [2024-11-06 12:48:09.885355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.402 [2024-11-06 12:48:09.885391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:21.402 [2024-11-06 12:48:09.885403] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.402 [2024-11-06 12:48:09.885852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.402 [2024-11-06 12:48:09.885883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:21.402 [2024-11-06 12:48:09.885964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:21.402 [2024-11-06 12:48:09.885991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:21.402 [2024-11-06 12:48:09.886161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:21.402 [2024-11-06 12:48:09.886185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:21.402 [2024-11-06 12:48:09.886510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:21.402 [2024-11-06 12:48:09.893248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:21.402 [2024-11-06 12:48:09.893278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:21.402 [2024-11-06 12:48:09.893514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.402 pt4 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.402 "name": "raid_bdev1", 00:17:21.402 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:21.402 "strip_size_kb": 64, 00:17:21.402 "state": "online", 00:17:21.402 "raid_level": "raid5f", 00:17:21.402 "superblock": true, 00:17:21.402 "num_base_bdevs": 4, 00:17:21.402 "num_base_bdevs_discovered": 4, 00:17:21.402 "num_base_bdevs_operational": 4, 00:17:21.402 "base_bdevs_list": [ 00:17:21.402 { 00:17:21.402 "name": "pt1", 00:17:21.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.402 "is_configured": true, 00:17:21.402 
"data_offset": 2048, 00:17:21.402 "data_size": 63488 00:17:21.402 }, 00:17:21.402 { 00:17:21.402 "name": "pt2", 00:17:21.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.402 "is_configured": true, 00:17:21.402 "data_offset": 2048, 00:17:21.402 "data_size": 63488 00:17:21.402 }, 00:17:21.402 { 00:17:21.402 "name": "pt3", 00:17:21.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.402 "is_configured": true, 00:17:21.402 "data_offset": 2048, 00:17:21.402 "data_size": 63488 00:17:21.402 }, 00:17:21.402 { 00:17:21.402 "name": "pt4", 00:17:21.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.402 "is_configured": true, 00:17:21.402 "data_offset": 2048, 00:17:21.402 "data_size": 63488 00:17:21.402 } 00:17:21.402 ] 00:17:21.402 }' 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.402 12:48:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.979 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:21.979 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.980 12:48:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.980 [2024-11-06 12:48:10.465923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.980 "name": "raid_bdev1", 00:17:21.980 "aliases": [ 00:17:21.980 "b3d89756-e304-4eb3-b113-5d4ecdb5da07" 00:17:21.980 ], 00:17:21.980 "product_name": "Raid Volume", 00:17:21.980 "block_size": 512, 00:17:21.980 "num_blocks": 190464, 00:17:21.980 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:21.980 "assigned_rate_limits": { 00:17:21.980 "rw_ios_per_sec": 0, 00:17:21.980 "rw_mbytes_per_sec": 0, 00:17:21.980 "r_mbytes_per_sec": 0, 00:17:21.980 "w_mbytes_per_sec": 0 00:17:21.980 }, 00:17:21.980 "claimed": false, 00:17:21.980 "zoned": false, 00:17:21.980 "supported_io_types": { 00:17:21.980 "read": true, 00:17:21.980 "write": true, 00:17:21.980 "unmap": false, 00:17:21.980 "flush": false, 00:17:21.980 "reset": true, 00:17:21.980 "nvme_admin": false, 00:17:21.980 "nvme_io": false, 00:17:21.980 "nvme_io_md": false, 00:17:21.980 "write_zeroes": true, 00:17:21.980 "zcopy": false, 00:17:21.980 "get_zone_info": false, 00:17:21.980 "zone_management": false, 00:17:21.980 "zone_append": false, 00:17:21.980 "compare": false, 00:17:21.980 "compare_and_write": false, 00:17:21.980 "abort": false, 00:17:21.980 "seek_hole": false, 00:17:21.980 "seek_data": false, 00:17:21.980 "copy": false, 00:17:21.980 "nvme_iov_md": false 00:17:21.980 }, 00:17:21.980 "driver_specific": { 00:17:21.980 "raid": { 00:17:21.980 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:21.980 "strip_size_kb": 64, 00:17:21.980 "state": "online", 00:17:21.980 "raid_level": "raid5f", 00:17:21.980 "superblock": true, 00:17:21.980 "num_base_bdevs": 4, 00:17:21.980 "num_base_bdevs_discovered": 4, 
00:17:21.980 "num_base_bdevs_operational": 4, 00:17:21.980 "base_bdevs_list": [ 00:17:21.980 { 00:17:21.980 "name": "pt1", 00:17:21.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.980 "is_configured": true, 00:17:21.980 "data_offset": 2048, 00:17:21.980 "data_size": 63488 00:17:21.980 }, 00:17:21.980 { 00:17:21.980 "name": "pt2", 00:17:21.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.980 "is_configured": true, 00:17:21.980 "data_offset": 2048, 00:17:21.980 "data_size": 63488 00:17:21.980 }, 00:17:21.980 { 00:17:21.980 "name": "pt3", 00:17:21.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.980 "is_configured": true, 00:17:21.980 "data_offset": 2048, 00:17:21.980 "data_size": 63488 00:17:21.980 }, 00:17:21.980 { 00:17:21.980 "name": "pt4", 00:17:21.980 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.980 "is_configured": true, 00:17:21.980 "data_offset": 2048, 00:17:21.980 "data_size": 63488 00:17:21.980 } 00:17:21.980 ] 00:17:21.980 } 00:17:21.980 } 00:17:21.980 }' 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:21.980 pt2 00:17:21.980 pt3 00:17:21.980 pt4' 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.980 12:48:10 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.980 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.239 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.239 
12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 [2024-11-06 12:48:10.841935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3d89756-e304-4eb3-b113-5d4ecdb5da07 '!=' b3d89756-e304-4eb3-b113-5d4ecdb5da07 ']' 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.240 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.498 [2024-11-06 12:48:10.893882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.498 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.498 "name": "raid_bdev1", 00:17:22.498 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:22.498 "strip_size_kb": 64, 00:17:22.498 "state": "online", 00:17:22.498 "raid_level": "raid5f", 00:17:22.498 "superblock": true, 00:17:22.498 "num_base_bdevs": 4, 00:17:22.498 "num_base_bdevs_discovered": 3, 00:17:22.498 "num_base_bdevs_operational": 3, 00:17:22.498 "base_bdevs_list": [ 00:17:22.498 { 00:17:22.498 "name": null, 00:17:22.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.498 "is_configured": false, 00:17:22.498 "data_offset": 0, 00:17:22.498 "data_size": 63488 00:17:22.498 }, 00:17:22.498 { 00:17:22.498 "name": "pt2", 00:17:22.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.498 "is_configured": true, 00:17:22.498 "data_offset": 2048, 00:17:22.498 "data_size": 63488 00:17:22.498 }, 00:17:22.498 { 00:17:22.498 "name": "pt3", 00:17:22.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.498 "is_configured": true, 00:17:22.498 "data_offset": 2048, 00:17:22.499 "data_size": 63488 00:17:22.499 }, 00:17:22.499 { 00:17:22.499 "name": "pt4", 00:17:22.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:22.499 "is_configured": true, 00:17:22.499 
"data_offset": 2048, 00:17:22.499 "data_size": 63488 00:17:22.499 } 00:17:22.499 ] 00:17:22.499 }' 00:17:22.499 12:48:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.499 12:48:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.063 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.063 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 [2024-11-06 12:48:11.481977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.064 [2024-11-06 12:48:11.482017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.064 [2024-11-06 12:48:11.482115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.064 [2024-11-06 12:48:11.482247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.064 [2024-11-06 12:48:11.482265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 [2024-11-06 12:48:11.569983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.064 [2024-11-06 12:48:11.570050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.064 [2024-11-06 12:48:11.570080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:23.064 [2024-11-06 12:48:11.570094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.064 [2024-11-06 12:48:11.572880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.064 [2024-11-06 12:48:11.573053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.064 [2024-11-06 12:48:11.573176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:23.064 [2024-11-06 12:48:11.573257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.064 pt2 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.064 "name": "raid_bdev1", 00:17:23.064 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:23.064 "strip_size_kb": 64, 00:17:23.064 "state": "configuring", 00:17:23.064 "raid_level": "raid5f", 00:17:23.064 "superblock": true, 00:17:23.064 
"num_base_bdevs": 4, 00:17:23.064 "num_base_bdevs_discovered": 1, 00:17:23.064 "num_base_bdevs_operational": 3, 00:17:23.064 "base_bdevs_list": [ 00:17:23.064 { 00:17:23.064 "name": null, 00:17:23.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.064 "is_configured": false, 00:17:23.064 "data_offset": 2048, 00:17:23.064 "data_size": 63488 00:17:23.064 }, 00:17:23.064 { 00:17:23.064 "name": "pt2", 00:17:23.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.064 "is_configured": true, 00:17:23.064 "data_offset": 2048, 00:17:23.064 "data_size": 63488 00:17:23.064 }, 00:17:23.064 { 00:17:23.064 "name": null, 00:17:23.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.064 "is_configured": false, 00:17:23.064 "data_offset": 2048, 00:17:23.064 "data_size": 63488 00:17:23.064 }, 00:17:23.064 { 00:17:23.064 "name": null, 00:17:23.064 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:23.064 "is_configured": false, 00:17:23.064 "data_offset": 2048, 00:17:23.064 "data_size": 63488 00:17:23.064 } 00:17:23.064 ] 00:17:23.064 }' 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.064 12:48:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.629 [2024-11-06 12:48:12.098244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:23.629 [2024-11-06 
12:48:12.098342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.629 [2024-11-06 12:48:12.098388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:23.629 [2024-11-06 12:48:12.098408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.629 [2024-11-06 12:48:12.099122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.629 [2024-11-06 12:48:12.099176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:23.629 [2024-11-06 12:48:12.099359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:23.629 [2024-11-06 12:48:12.099443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:23.629 pt3 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.629 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.630 "name": "raid_bdev1", 00:17:23.630 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:23.630 "strip_size_kb": 64, 00:17:23.630 "state": "configuring", 00:17:23.630 "raid_level": "raid5f", 00:17:23.630 "superblock": true, 00:17:23.630 "num_base_bdevs": 4, 00:17:23.630 "num_base_bdevs_discovered": 2, 00:17:23.630 "num_base_bdevs_operational": 3, 00:17:23.630 "base_bdevs_list": [ 00:17:23.630 { 00:17:23.630 "name": null, 00:17:23.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.630 "is_configured": false, 00:17:23.630 "data_offset": 2048, 00:17:23.630 "data_size": 63488 00:17:23.630 }, 00:17:23.630 { 00:17:23.630 "name": "pt2", 00:17:23.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.630 "is_configured": true, 00:17:23.630 "data_offset": 2048, 00:17:23.630 "data_size": 63488 00:17:23.630 }, 00:17:23.630 { 00:17:23.630 "name": "pt3", 00:17:23.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.630 "is_configured": true, 00:17:23.630 "data_offset": 2048, 00:17:23.630 "data_size": 63488 00:17:23.630 }, 00:17:23.630 { 00:17:23.630 "name": null, 00:17:23.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:23.630 "is_configured": false, 00:17:23.630 "data_offset": 2048, 
00:17:23.630 "data_size": 63488 00:17:23.630 } 00:17:23.630 ] 00:17:23.630 }' 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.630 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.196 [2024-11-06 12:48:12.590402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:24.196 [2024-11-06 12:48:12.590683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.196 [2024-11-06 12:48:12.590745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:24.196 [2024-11-06 12:48:12.590765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.196 [2024-11-06 12:48:12.591523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.196 [2024-11-06 12:48:12.591553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:24.196 [2024-11-06 12:48:12.591732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:24.196 [2024-11-06 12:48:12.591768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:24.196 [2024-11-06 12:48:12.591935] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:24.196 [2024-11-06 12:48:12.591952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:24.196 [2024-11-06 12:48:12.592287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:24.196 [2024-11-06 12:48:12.598690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:24.196 [2024-11-06 12:48:12.598727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:24.196 [2024-11-06 12:48:12.599081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.196 pt4 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.196 
12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.196 12:48:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.196 "name": "raid_bdev1", 00:17:24.196 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:24.196 "strip_size_kb": 64, 00:17:24.196 "state": "online", 00:17:24.196 "raid_level": "raid5f", 00:17:24.196 "superblock": true, 00:17:24.196 "num_base_bdevs": 4, 00:17:24.197 "num_base_bdevs_discovered": 3, 00:17:24.197 "num_base_bdevs_operational": 3, 00:17:24.197 "base_bdevs_list": [ 00:17:24.197 { 00:17:24.197 "name": null, 00:17:24.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.197 "is_configured": false, 00:17:24.197 "data_offset": 2048, 00:17:24.197 "data_size": 63488 00:17:24.197 }, 00:17:24.197 { 00:17:24.197 "name": "pt2", 00:17:24.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.197 "is_configured": true, 00:17:24.197 "data_offset": 2048, 00:17:24.197 "data_size": 63488 00:17:24.197 }, 00:17:24.197 { 00:17:24.197 "name": "pt3", 00:17:24.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.197 "is_configured": true, 00:17:24.197 "data_offset": 2048, 00:17:24.197 "data_size": 63488 00:17:24.197 }, 00:17:24.197 { 00:17:24.197 "name": "pt4", 00:17:24.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:24.197 "is_configured": true, 00:17:24.197 "data_offset": 2048, 00:17:24.197 "data_size": 63488 00:17:24.197 } 00:17:24.197 ] 00:17:24.197 }' 00:17:24.197 12:48:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.197 12:48:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.456 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.456 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.456 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.456 [2024-11-06 12:48:13.070593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.456 [2024-11-06 12:48:13.070632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.456 [2024-11-06 12:48:13.070742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.457 [2024-11-06 12:48:13.070911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.457 [2024-11-06 12:48:13.070935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:24.457 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.457 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.457 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:24.457 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.457 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.457 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.716 [2024-11-06 12:48:13.138605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.716 [2024-11-06 12:48:13.138686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.716 [2024-11-06 12:48:13.138725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:24.716 [2024-11-06 12:48:13.138746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.716 [2024-11-06 12:48:13.141744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.716 [2024-11-06 12:48:13.141800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.716 [2024-11-06 12:48:13.141947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:24.716 [2024-11-06 12:48:13.142023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.716 
[2024-11-06 12:48:13.142204] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:24.716 [2024-11-06 12:48:13.142229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.716 [2024-11-06 12:48:13.142274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:24.716 [2024-11-06 12:48:13.142355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.716 [2024-11-06 12:48:13.142504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:24.716 pt1 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.716 "name": "raid_bdev1", 00:17:24.716 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:24.716 "strip_size_kb": 64, 00:17:24.716 "state": "configuring", 00:17:24.716 "raid_level": "raid5f", 00:17:24.716 "superblock": true, 00:17:24.716 "num_base_bdevs": 4, 00:17:24.716 "num_base_bdevs_discovered": 2, 00:17:24.716 "num_base_bdevs_operational": 3, 00:17:24.716 "base_bdevs_list": [ 00:17:24.716 { 00:17:24.716 "name": null, 00:17:24.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.716 "is_configured": false, 00:17:24.716 "data_offset": 2048, 00:17:24.716 "data_size": 63488 00:17:24.716 }, 00:17:24.716 { 00:17:24.716 "name": "pt2", 00:17:24.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.716 "is_configured": true, 00:17:24.716 "data_offset": 2048, 00:17:24.716 "data_size": 63488 00:17:24.716 }, 00:17:24.716 { 00:17:24.716 "name": "pt3", 00:17:24.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.716 "is_configured": true, 00:17:24.716 "data_offset": 2048, 00:17:24.716 "data_size": 63488 00:17:24.716 }, 00:17:24.716 { 00:17:24.716 "name": null, 00:17:24.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:24.716 "is_configured": false, 00:17:24.716 "data_offset": 2048, 00:17:24.716 "data_size": 63488 00:17:24.716 } 00:17:24.716 ] 
00:17:24.716 }' 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.716 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.283 [2024-11-06 12:48:13.750988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:25.283 [2024-11-06 12:48:13.751070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.283 [2024-11-06 12:48:13.751112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:25.283 [2024-11-06 12:48:13.751130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.283 [2024-11-06 12:48:13.751894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.283 [2024-11-06 12:48:13.752111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:25.283 [2024-11-06 12:48:13.752329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:25.283 [2024-11-06 12:48:13.752370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:25.283 [2024-11-06 12:48:13.752570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:25.283 [2024-11-06 12:48:13.752620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:25.283 [2024-11-06 12:48:13.752993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:25.283 [2024-11-06 12:48:13.759869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:25.283 [2024-11-06 12:48:13.759907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:25.283 [2024-11-06 12:48:13.760351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.283 pt4 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.283 12:48:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.283 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.284 "name": "raid_bdev1", 00:17:25.284 "uuid": "b3d89756-e304-4eb3-b113-5d4ecdb5da07", 00:17:25.284 "strip_size_kb": 64, 00:17:25.284 "state": "online", 00:17:25.284 "raid_level": "raid5f", 00:17:25.284 "superblock": true, 00:17:25.284 "num_base_bdevs": 4, 00:17:25.284 "num_base_bdevs_discovered": 3, 00:17:25.284 "num_base_bdevs_operational": 3, 00:17:25.284 "base_bdevs_list": [ 00:17:25.284 { 00:17:25.284 "name": null, 00:17:25.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.284 "is_configured": false, 00:17:25.284 "data_offset": 2048, 00:17:25.284 "data_size": 63488 00:17:25.284 }, 00:17:25.284 { 00:17:25.284 "name": "pt2", 00:17:25.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.284 "is_configured": true, 00:17:25.284 "data_offset": 2048, 00:17:25.284 "data_size": 63488 00:17:25.284 }, 00:17:25.284 { 00:17:25.284 "name": "pt3", 00:17:25.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:25.284 "is_configured": true, 00:17:25.284 "data_offset": 2048, 00:17:25.284 "data_size": 63488 
00:17:25.284 }, 00:17:25.284 { 00:17:25.284 "name": "pt4", 00:17:25.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:25.284 "is_configured": true, 00:17:25.284 "data_offset": 2048, 00:17:25.284 "data_size": 63488 00:17:25.284 } 00:17:25.284 ] 00:17:25.284 }' 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.284 12:48:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.851 [2024-11-06 12:48:14.344812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b3d89756-e304-4eb3-b113-5d4ecdb5da07 '!=' b3d89756-e304-4eb3-b113-5d4ecdb5da07 ']' 00:17:25.851 12:48:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84614 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84614 ']' 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84614 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84614 00:17:25.851 killing process with pid 84614 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84614' 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84614 00:17:25.851 [2024-11-06 12:48:14.434912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.851 12:48:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84614 00:17:25.851 [2024-11-06 12:48:14.435032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.851 [2024-11-06 12:48:14.435133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.851 [2024-11-06 12:48:14.435156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:26.418 [2024-11-06 12:48:14.787983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.353 12:48:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:27.353 
00:17:27.353 real 0m9.695s 00:17:27.353 user 0m15.894s 00:17:27.353 sys 0m1.432s 00:17:27.353 12:48:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:27.353 12:48:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.353 ************************************ 00:17:27.353 END TEST raid5f_superblock_test 00:17:27.353 ************************************ 00:17:27.353 12:48:15 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:27.353 12:48:15 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:27.353 12:48:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:27.353 12:48:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:27.353 12:48:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.353 ************************************ 00:17:27.353 START TEST raid5f_rebuild_test 00:17:27.353 ************************************ 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.353 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:27.354 12:48:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85106 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85106 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85106 ']' 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:27.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:27.354 12:48:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.612 [2024-11-06 12:48:16.009770] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:17:27.612 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:27.612 Zero copy mechanism will not be used. 
00:17:27.612 [2024-11-06 12:48:16.010127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85106 ] 00:17:27.612 [2024-11-06 12:48:16.191358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.872 [2024-11-06 12:48:16.349583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.130 [2024-11-06 12:48:16.583488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.130 [2024-11-06 12:48:16.583559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.706 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:28.706 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:17:28.706 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 BaseBdev1_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 [2024-11-06 12:48:17.110675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:28.707 [2024-11-06 12:48:17.110816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.707 [2024-11-06 12:48:17.110863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.707 [2024-11-06 12:48:17.110888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.707 [2024-11-06 12:48:17.114708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.707 [2024-11-06 12:48:17.114772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.707 BaseBdev1 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 BaseBdev2_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 [2024-11-06 12:48:17.179355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:28.707 [2024-11-06 12:48:17.179454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.707 [2024-11-06 12:48:17.179490] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.707 [2024-11-06 12:48:17.179511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.707 [2024-11-06 12:48:17.182718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.707 [2024-11-06 12:48:17.182801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.707 BaseBdev2 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 BaseBdev3_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 [2024-11-06 12:48:17.249510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:28.707 [2024-11-06 12:48:17.249780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.707 [2024-11-06 12:48:17.249868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:28.707 [2024-11-06 12:48:17.250118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.707 
[2024-11-06 12:48:17.253358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.707 [2024-11-06 12:48:17.253528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:28.707 BaseBdev3 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 BaseBdev4_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 [2024-11-06 12:48:17.310886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:28.707 [2024-11-06 12:48:17.310989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.707 [2024-11-06 12:48:17.311026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:28.707 [2024-11-06 12:48:17.311047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.707 [2024-11-06 12:48:17.314340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.707 [2024-11-06 12:48:17.314540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:17:28.707 BaseBdev4 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.707 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.971 spare_malloc 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.971 spare_delay 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.971 [2024-11-06 12:48:17.379992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.971 [2024-11-06 12:48:17.380099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.971 [2024-11-06 12:48:17.380149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:28.971 [2024-11-06 12:48:17.380168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.971 [2024-11-06 12:48:17.383325] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.971 [2024-11-06 12:48:17.383405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.971 spare 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.971 [2024-11-06 12:48:17.388138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.971 [2024-11-06 12:48:17.390840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.971 [2024-11-06 12:48:17.390940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.971 [2024-11-06 12:48:17.391045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:28.971 [2024-11-06 12:48:17.391223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.971 [2024-11-06 12:48:17.391262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:28.971 [2024-11-06 12:48:17.391671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:28.971 [2024-11-06 12:48:17.398719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.971 [2024-11-06 12:48:17.398747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.971 [2024-11-06 12:48:17.399120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.971 12:48:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.971 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.971 "name": "raid_bdev1", 00:17:28.971 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:28.971 "strip_size_kb": 64, 00:17:28.971 "state": "online", 00:17:28.971 
"raid_level": "raid5f", 00:17:28.971 "superblock": false, 00:17:28.971 "num_base_bdevs": 4, 00:17:28.971 "num_base_bdevs_discovered": 4, 00:17:28.971 "num_base_bdevs_operational": 4, 00:17:28.971 "base_bdevs_list": [ 00:17:28.971 { 00:17:28.971 "name": "BaseBdev1", 00:17:28.971 "uuid": "cd9e1a80-c557-5481-90ac-cc75520a0b81", 00:17:28.971 "is_configured": true, 00:17:28.971 "data_offset": 0, 00:17:28.971 "data_size": 65536 00:17:28.971 }, 00:17:28.971 { 00:17:28.971 "name": "BaseBdev2", 00:17:28.971 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:28.971 "is_configured": true, 00:17:28.971 "data_offset": 0, 00:17:28.971 "data_size": 65536 00:17:28.971 }, 00:17:28.971 { 00:17:28.971 "name": "BaseBdev3", 00:17:28.971 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:28.971 "is_configured": true, 00:17:28.971 "data_offset": 0, 00:17:28.971 "data_size": 65536 00:17:28.971 }, 00:17:28.971 { 00:17:28.971 "name": "BaseBdev4", 00:17:28.971 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:28.972 "is_configured": true, 00:17:28.972 "data_offset": 0, 00:17:28.972 "data_size": 65536 00:17:28.972 } 00:17:28.972 ] 00:17:28.972 }' 00:17:28.972 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.972 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 [2024-11-06 12:48:17.911660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 12:48:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:29.539 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:29.797 [2024-11-06 12:48:18.323558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:29.797 /dev/nbd0 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.797 1+0 records in 00:17:29.797 1+0 records out 00:17:29.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304178 s, 13.5 MB/s 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:29.797 12:48:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:30.364 512+0 records in 00:17:30.364 512+0 records out 00:17:30.364 100663296 bytes (101 MB, 96 MiB) copied, 0.609199 s, 165 MB/s 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.364 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:30.623 [2024-11-06 12:48:19.278316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.882 [2024-11-06 12:48:19.307949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.882 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.882 "name": "raid_bdev1", 00:17:30.882 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:30.882 "strip_size_kb": 64, 00:17:30.882 "state": "online", 00:17:30.882 "raid_level": "raid5f", 00:17:30.882 "superblock": false, 00:17:30.882 "num_base_bdevs": 4, 00:17:30.882 "num_base_bdevs_discovered": 3, 00:17:30.882 "num_base_bdevs_operational": 3, 00:17:30.882 "base_bdevs_list": [ 00:17:30.882 { 00:17:30.882 "name": null, 00:17:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.882 "is_configured": false, 00:17:30.882 "data_offset": 0, 00:17:30.882 "data_size": 65536 00:17:30.882 }, 00:17:30.882 { 00:17:30.883 "name": "BaseBdev2", 00:17:30.883 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:30.883 "is_configured": true, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 }, 00:17:30.883 { 00:17:30.883 "name": "BaseBdev3", 00:17:30.883 "uuid": 
"3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:30.883 "is_configured": true, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 }, 00:17:30.883 { 00:17:30.883 "name": "BaseBdev4", 00:17:30.883 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:30.883 "is_configured": true, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 } 00:17:30.883 ] 00:17:30.883 }' 00:17:30.883 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.883 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.452 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:31.452 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.452 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.452 [2024-11-06 12:48:19.840078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.452 [2024-11-06 12:48:19.855171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:31.452 12:48:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.452 12:48:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:31.452 [2024-11-06 12:48:19.864562] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.386 12:48:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.386 "name": "raid_bdev1", 00:17:32.386 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:32.386 "strip_size_kb": 64, 00:17:32.386 "state": "online", 00:17:32.386 "raid_level": "raid5f", 00:17:32.386 "superblock": false, 00:17:32.386 "num_base_bdevs": 4, 00:17:32.386 "num_base_bdevs_discovered": 4, 00:17:32.386 "num_base_bdevs_operational": 4, 00:17:32.386 "process": { 00:17:32.386 "type": "rebuild", 00:17:32.386 "target": "spare", 00:17:32.386 "progress": { 00:17:32.386 "blocks": 17280, 00:17:32.386 "percent": 8 00:17:32.386 } 00:17:32.386 }, 00:17:32.386 "base_bdevs_list": [ 00:17:32.386 { 00:17:32.386 "name": "spare", 00:17:32.386 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:32.386 "is_configured": true, 00:17:32.386 "data_offset": 0, 00:17:32.386 "data_size": 65536 00:17:32.386 }, 00:17:32.386 { 00:17:32.386 "name": "BaseBdev2", 00:17:32.386 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:32.386 "is_configured": true, 00:17:32.386 "data_offset": 0, 00:17:32.386 "data_size": 65536 00:17:32.386 }, 00:17:32.386 { 00:17:32.386 "name": "BaseBdev3", 00:17:32.386 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:32.386 "is_configured": true, 00:17:32.386 "data_offset": 0, 00:17:32.386 "data_size": 65536 00:17:32.386 }, 
00:17:32.386 { 00:17:32.386 "name": "BaseBdev4", 00:17:32.386 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:32.386 "is_configured": true, 00:17:32.386 "data_offset": 0, 00:17:32.386 "data_size": 65536 00:17:32.386 } 00:17:32.386 ] 00:17:32.386 }' 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.386 12:48:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.386 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.386 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:32.386 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.386 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.386 [2024-11-06 12:48:21.027100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.645 [2024-11-06 12:48:21.080455] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:32.645 [2024-11-06 12:48:21.080584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.645 [2024-11-06 12:48:21.080621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.645 [2024-11-06 12:48:21.080641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.645 "name": "raid_bdev1", 00:17:32.645 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:32.645 "strip_size_kb": 64, 00:17:32.645 "state": "online", 00:17:32.645 "raid_level": "raid5f", 00:17:32.645 "superblock": false, 00:17:32.645 "num_base_bdevs": 4, 00:17:32.645 "num_base_bdevs_discovered": 3, 00:17:32.645 "num_base_bdevs_operational": 3, 00:17:32.645 "base_bdevs_list": [ 00:17:32.645 { 00:17:32.645 "name": null, 00:17:32.645 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:32.645 "is_configured": false, 00:17:32.645 "data_offset": 0, 00:17:32.645 "data_size": 65536 00:17:32.645 }, 00:17:32.645 { 00:17:32.645 "name": "BaseBdev2", 00:17:32.645 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:32.645 "is_configured": true, 00:17:32.645 "data_offset": 0, 00:17:32.645 "data_size": 65536 00:17:32.645 }, 00:17:32.645 { 00:17:32.645 "name": "BaseBdev3", 00:17:32.645 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:32.645 "is_configured": true, 00:17:32.645 "data_offset": 0, 00:17:32.645 "data_size": 65536 00:17:32.645 }, 00:17:32.645 { 00:17:32.645 "name": "BaseBdev4", 00:17:32.645 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:32.645 "is_configured": true, 00:17:32.645 "data_offset": 0, 00:17:32.645 "data_size": 65536 00:17:32.645 } 00:17:32.645 ] 00:17:32.645 }' 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.645 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.210 12:48:21 
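The `verify_raid_bdev_state` steps traced above fetch all raid bdevs over RPC and narrow to one with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields against the expected state. A self-contained sketch of that filtering, with the `rpc_cmd bdev_raid_get_bdevs all` output replaced by an inline JSON sample shaped like the `raid_bdev_info` blobs in the log (requires `jq`):

```shell
# Stand-in for the RPC response; only the fields the checks touch.
all_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid5f",
            "num_base_bdevs":4,"num_base_bdevs_operational":3}]'
# Same select-by-name filter as bdev_raid.sh@113 in the trace.
raid_bdev_info=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
[[ $state == online && $level == raid5f ]] && echo "verified: $state/$level"
```

The test driver does the same comparison after removing `BaseBdev1`: the array stays `online` in degraded form, with `num_base_bdevs_operational` dropping from 4 to 3.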
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.210 "name": "raid_bdev1", 00:17:33.210 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:33.210 "strip_size_kb": 64, 00:17:33.210 "state": "online", 00:17:33.210 "raid_level": "raid5f", 00:17:33.210 "superblock": false, 00:17:33.210 "num_base_bdevs": 4, 00:17:33.210 "num_base_bdevs_discovered": 3, 00:17:33.210 "num_base_bdevs_operational": 3, 00:17:33.210 "base_bdevs_list": [ 00:17:33.210 { 00:17:33.210 "name": null, 00:17:33.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.210 "is_configured": false, 00:17:33.210 "data_offset": 0, 00:17:33.210 "data_size": 65536 00:17:33.210 }, 00:17:33.210 { 00:17:33.210 "name": "BaseBdev2", 00:17:33.210 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:33.210 "is_configured": true, 00:17:33.210 "data_offset": 0, 00:17:33.210 "data_size": 65536 00:17:33.210 }, 00:17:33.210 { 00:17:33.210 "name": "BaseBdev3", 00:17:33.210 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:33.210 "is_configured": true, 00:17:33.210 "data_offset": 0, 00:17:33.210 "data_size": 65536 00:17:33.210 }, 00:17:33.210 { 00:17:33.210 "name": "BaseBdev4", 00:17:33.210 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:33.210 "is_configured": true, 00:17:33.210 "data_offset": 0, 00:17:33.210 "data_size": 65536 00:17:33.210 } 00:17:33.210 ] 00:17:33.210 }' 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.210 [2024-11-06 12:48:21.804877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.210 [2024-11-06 12:48:21.818907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.210 12:48:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:33.210 [2024-11-06 12:48:21.828802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.585 12:48:22 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.585 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.585 "name": "raid_bdev1", 00:17:34.585 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:34.585 "strip_size_kb": 64, 00:17:34.585 "state": "online", 00:17:34.585 "raid_level": "raid5f", 00:17:34.585 "superblock": false, 00:17:34.585 "num_base_bdevs": 4, 00:17:34.585 "num_base_bdevs_discovered": 4, 00:17:34.585 "num_base_bdevs_operational": 4, 00:17:34.585 "process": { 00:17:34.585 "type": "rebuild", 00:17:34.585 "target": "spare", 00:17:34.585 "progress": { 00:17:34.585 "blocks": 17280, 00:17:34.585 "percent": 8 00:17:34.585 } 00:17:34.585 }, 00:17:34.585 "base_bdevs_list": [ 00:17:34.585 { 00:17:34.585 "name": "spare", 00:17:34.585 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:34.585 "is_configured": true, 00:17:34.585 "data_offset": 0, 00:17:34.585 "data_size": 65536 00:17:34.585 }, 00:17:34.585 { 00:17:34.585 "name": "BaseBdev2", 00:17:34.585 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:34.585 "is_configured": true, 00:17:34.585 "data_offset": 0, 00:17:34.585 "data_size": 65536 00:17:34.585 }, 00:17:34.585 { 00:17:34.585 "name": "BaseBdev3", 00:17:34.585 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:34.586 "is_configured": true, 00:17:34.586 "data_offset": 0, 00:17:34.586 "data_size": 65536 00:17:34.586 }, 00:17:34.586 { 00:17:34.586 "name": "BaseBdev4", 00:17:34.586 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:34.586 "is_configured": true, 00:17:34.586 "data_offset": 0, 00:17:34.586 "data_size": 65536 00:17:34.586 } 00:17:34.586 ] 00:17:34.586 }' 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=676 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.586 12:48:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.586 "name": "raid_bdev1", 00:17:34.586 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 
00:17:34.586 "strip_size_kb": 64, 00:17:34.586 "state": "online", 00:17:34.586 "raid_level": "raid5f", 00:17:34.586 "superblock": false, 00:17:34.586 "num_base_bdevs": 4, 00:17:34.586 "num_base_bdevs_discovered": 4, 00:17:34.586 "num_base_bdevs_operational": 4, 00:17:34.586 "process": { 00:17:34.586 "type": "rebuild", 00:17:34.586 "target": "spare", 00:17:34.586 "progress": { 00:17:34.586 "blocks": 21120, 00:17:34.586 "percent": 10 00:17:34.586 } 00:17:34.586 }, 00:17:34.586 "base_bdevs_list": [ 00:17:34.586 { 00:17:34.586 "name": "spare", 00:17:34.586 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:34.586 "is_configured": true, 00:17:34.586 "data_offset": 0, 00:17:34.586 "data_size": 65536 00:17:34.586 }, 00:17:34.586 { 00:17:34.586 "name": "BaseBdev2", 00:17:34.586 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:34.586 "is_configured": true, 00:17:34.586 "data_offset": 0, 00:17:34.586 "data_size": 65536 00:17:34.586 }, 00:17:34.586 { 00:17:34.586 "name": "BaseBdev3", 00:17:34.586 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:34.586 "is_configured": true, 00:17:34.586 "data_offset": 0, 00:17:34.586 "data_size": 65536 00:17:34.586 }, 00:17:34.586 { 00:17:34.586 "name": "BaseBdev4", 00:17:34.586 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:34.586 "is_configured": true, 00:17:34.586 "data_offset": 0, 00:17:34.586 "data_size": 65536 00:17:34.586 } 00:17:34.586 ] 00:17:34.586 }' 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.586 12:48:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.521 12:48:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.521 12:48:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.778 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.778 "name": "raid_bdev1", 00:17:35.778 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:35.778 "strip_size_kb": 64, 00:17:35.778 "state": "online", 00:17:35.778 "raid_level": "raid5f", 00:17:35.778 "superblock": false, 00:17:35.778 "num_base_bdevs": 4, 00:17:35.778 "num_base_bdevs_discovered": 4, 00:17:35.778 "num_base_bdevs_operational": 4, 00:17:35.778 "process": { 00:17:35.778 "type": "rebuild", 00:17:35.778 "target": "spare", 00:17:35.778 "progress": { 00:17:35.778 "blocks": 44160, 00:17:35.778 "percent": 22 00:17:35.778 } 00:17:35.778 }, 00:17:35.778 "base_bdevs_list": [ 00:17:35.778 { 00:17:35.778 "name": "spare", 00:17:35.778 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 
00:17:35.778 "is_configured": true, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 65536 00:17:35.778 }, 00:17:35.778 { 00:17:35.778 "name": "BaseBdev2", 00:17:35.778 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:35.778 "is_configured": true, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 65536 00:17:35.778 }, 00:17:35.778 { 00:17:35.778 "name": "BaseBdev3", 00:17:35.778 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:35.778 "is_configured": true, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 65536 00:17:35.778 }, 00:17:35.778 { 00:17:35.778 "name": "BaseBdev4", 00:17:35.778 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:35.778 "is_configured": true, 00:17:35.778 "data_offset": 0, 00:17:35.778 "data_size": 65536 00:17:35.778 } 00:17:35.778 ] 00:17:35.778 }' 00:17:35.778 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.778 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.778 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.778 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.778 12:48:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.709 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.710 12:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.710 12:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.710 12:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.710 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.710 "name": "raid_bdev1", 00:17:36.710 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:36.710 "strip_size_kb": 64, 00:17:36.710 "state": "online", 00:17:36.710 "raid_level": "raid5f", 00:17:36.710 "superblock": false, 00:17:36.710 "num_base_bdevs": 4, 00:17:36.710 "num_base_bdevs_discovered": 4, 00:17:36.710 "num_base_bdevs_operational": 4, 00:17:36.710 "process": { 00:17:36.710 "type": "rebuild", 00:17:36.710 "target": "spare", 00:17:36.710 "progress": { 00:17:36.710 "blocks": 65280, 00:17:36.710 "percent": 33 00:17:36.710 } 00:17:36.710 }, 00:17:36.710 "base_bdevs_list": [ 00:17:36.710 { 00:17:36.710 "name": "spare", 00:17:36.710 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:36.710 "is_configured": true, 00:17:36.710 "data_offset": 0, 00:17:36.710 "data_size": 65536 00:17:36.710 }, 00:17:36.710 { 00:17:36.710 "name": "BaseBdev2", 00:17:36.710 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:36.710 "is_configured": true, 00:17:36.710 "data_offset": 0, 00:17:36.710 "data_size": 65536 00:17:36.710 }, 00:17:36.710 { 00:17:36.710 "name": "BaseBdev3", 00:17:36.710 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:36.710 "is_configured": true, 00:17:36.710 "data_offset": 0, 00:17:36.710 "data_size": 65536 00:17:36.710 }, 00:17:36.710 { 00:17:36.710 "name": 
"BaseBdev4", 00:17:36.710 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:36.710 "is_configured": true, 00:17:36.710 "data_offset": 0, 00:17:36.710 "data_size": 65536 00:17:36.710 } 00:17:36.710 ] 00:17:36.710 }' 00:17:36.710 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.973 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.973 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.973 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.973 12:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.937 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.937 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.938 12:48:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.938 "name": "raid_bdev1", 00:17:37.938 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:37.938 "strip_size_kb": 64, 00:17:37.938 "state": "online", 00:17:37.938 "raid_level": "raid5f", 00:17:37.938 "superblock": false, 00:17:37.938 "num_base_bdevs": 4, 00:17:37.938 "num_base_bdevs_discovered": 4, 00:17:37.938 "num_base_bdevs_operational": 4, 00:17:37.938 "process": { 00:17:37.938 "type": "rebuild", 00:17:37.938 "target": "spare", 00:17:37.938 "progress": { 00:17:37.938 "blocks": 88320, 00:17:37.938 "percent": 44 00:17:37.938 } 00:17:37.938 }, 00:17:37.938 "base_bdevs_list": [ 00:17:37.938 { 00:17:37.938 "name": "spare", 00:17:37.938 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:37.938 "is_configured": true, 00:17:37.938 "data_offset": 0, 00:17:37.938 "data_size": 65536 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "name": "BaseBdev2", 00:17:37.938 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:37.938 "is_configured": true, 00:17:37.938 "data_offset": 0, 00:17:37.938 "data_size": 65536 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "name": "BaseBdev3", 00:17:37.938 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:37.938 "is_configured": true, 00:17:37.938 "data_offset": 0, 00:17:37.938 "data_size": 65536 00:17:37.938 }, 00:17:37.938 { 00:17:37.938 "name": "BaseBdev4", 00:17:37.938 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:37.938 "is_configured": true, 00:17:37.938 "data_offset": 0, 00:17:37.938 "data_size": 65536 00:17:37.938 } 00:17:37.938 ] 00:17:37.938 }' 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.938 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.196 12:48:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.196 12:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.132 "name": "raid_bdev1", 00:17:39.132 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:39.132 "strip_size_kb": 64, 00:17:39.132 "state": "online", 00:17:39.132 "raid_level": "raid5f", 00:17:39.132 "superblock": false, 00:17:39.132 "num_base_bdevs": 4, 00:17:39.132 "num_base_bdevs_discovered": 4, 00:17:39.132 "num_base_bdevs_operational": 4, 00:17:39.132 "process": { 00:17:39.132 "type": "rebuild", 00:17:39.132 "target": "spare", 00:17:39.132 "progress": { 00:17:39.132 "blocks": 109440, 00:17:39.132 "percent": 55 00:17:39.132 } 
00:17:39.132 }, 00:17:39.132 "base_bdevs_list": [ 00:17:39.132 { 00:17:39.132 "name": "spare", 00:17:39.132 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 0, 00:17:39.132 "data_size": 65536 00:17:39.132 }, 00:17:39.132 { 00:17:39.132 "name": "BaseBdev2", 00:17:39.132 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 0, 00:17:39.132 "data_size": 65536 00:17:39.132 }, 00:17:39.132 { 00:17:39.132 "name": "BaseBdev3", 00:17:39.132 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 0, 00:17:39.132 "data_size": 65536 00:17:39.132 }, 00:17:39.132 { 00:17:39.132 "name": "BaseBdev4", 00:17:39.132 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 0, 00:17:39.132 "data_size": 65536 00:17:39.132 } 00:17:39.132 ] 00:17:39.132 }' 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.132 12:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.508 
12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.508 "name": "raid_bdev1", 00:17:40.508 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:40.508 "strip_size_kb": 64, 00:17:40.508 "state": "online", 00:17:40.508 "raid_level": "raid5f", 00:17:40.508 "superblock": false, 00:17:40.508 "num_base_bdevs": 4, 00:17:40.508 "num_base_bdevs_discovered": 4, 00:17:40.508 "num_base_bdevs_operational": 4, 00:17:40.508 "process": { 00:17:40.508 "type": "rebuild", 00:17:40.508 "target": "spare", 00:17:40.508 "progress": { 00:17:40.508 "blocks": 130560, 00:17:40.508 "percent": 66 00:17:40.508 } 00:17:40.508 }, 00:17:40.508 "base_bdevs_list": [ 00:17:40.508 { 00:17:40.508 "name": "spare", 00:17:40.508 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:40.508 "is_configured": true, 00:17:40.508 "data_offset": 0, 00:17:40.508 "data_size": 65536 00:17:40.508 }, 00:17:40.508 { 00:17:40.508 "name": "BaseBdev2", 00:17:40.508 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:40.508 "is_configured": true, 00:17:40.508 "data_offset": 0, 00:17:40.508 "data_size": 65536 00:17:40.508 }, 00:17:40.508 { 00:17:40.508 "name": "BaseBdev3", 00:17:40.508 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 
00:17:40.508 "is_configured": true, 00:17:40.508 "data_offset": 0, 00:17:40.508 "data_size": 65536 00:17:40.508 }, 00:17:40.508 { 00:17:40.508 "name": "BaseBdev4", 00:17:40.508 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:40.508 "is_configured": true, 00:17:40.508 "data_offset": 0, 00:17:40.508 "data_size": 65536 00:17:40.508 } 00:17:40.508 ] 00:17:40.508 }' 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.508 12:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.441 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.442 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.442 12:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.442 12:48:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.442 12:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.442 12:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.442 "name": "raid_bdev1", 00:17:41.442 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:41.442 "strip_size_kb": 64, 00:17:41.442 "state": "online", 00:17:41.442 "raid_level": "raid5f", 00:17:41.442 "superblock": false, 00:17:41.442 "num_base_bdevs": 4, 00:17:41.442 "num_base_bdevs_discovered": 4, 00:17:41.442 "num_base_bdevs_operational": 4, 00:17:41.442 "process": { 00:17:41.442 "type": "rebuild", 00:17:41.442 "target": "spare", 00:17:41.442 "progress": { 00:17:41.442 "blocks": 153600, 00:17:41.442 "percent": 78 00:17:41.442 } 00:17:41.442 }, 00:17:41.442 "base_bdevs_list": [ 00:17:41.442 { 00:17:41.442 "name": "spare", 00:17:41.442 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:41.442 "is_configured": true, 00:17:41.442 "data_offset": 0, 00:17:41.442 "data_size": 65536 00:17:41.442 }, 00:17:41.442 { 00:17:41.442 "name": "BaseBdev2", 00:17:41.442 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:41.442 "is_configured": true, 00:17:41.442 "data_offset": 0, 00:17:41.442 "data_size": 65536 00:17:41.442 }, 00:17:41.442 { 00:17:41.442 "name": "BaseBdev3", 00:17:41.442 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:41.442 "is_configured": true, 00:17:41.442 "data_offset": 0, 00:17:41.442 "data_size": 65536 00:17:41.442 }, 00:17:41.442 { 00:17:41.442 "name": "BaseBdev4", 00:17:41.442 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:41.442 "is_configured": true, 00:17:41.442 "data_offset": 0, 00:17:41.442 "data_size": 65536 00:17:41.442 } 00:17:41.442 ] 00:17:41.442 }' 00:17:41.442 12:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.442 12:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:41.442 12:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.700 12:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.700 12:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.635 "name": "raid_bdev1", 00:17:42.635 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:42.635 "strip_size_kb": 64, 00:17:42.635 "state": "online", 00:17:42.635 "raid_level": "raid5f", 00:17:42.635 "superblock": false, 00:17:42.635 "num_base_bdevs": 4, 00:17:42.635 "num_base_bdevs_discovered": 4, 00:17:42.635 "num_base_bdevs_operational": 4, 00:17:42.635 
"process": { 00:17:42.635 "type": "rebuild", 00:17:42.635 "target": "spare", 00:17:42.635 "progress": { 00:17:42.635 "blocks": 174720, 00:17:42.635 "percent": 88 00:17:42.635 } 00:17:42.635 }, 00:17:42.635 "base_bdevs_list": [ 00:17:42.635 { 00:17:42.635 "name": "spare", 00:17:42.635 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:42.635 "is_configured": true, 00:17:42.635 "data_offset": 0, 00:17:42.635 "data_size": 65536 00:17:42.635 }, 00:17:42.635 { 00:17:42.635 "name": "BaseBdev2", 00:17:42.635 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:42.635 "is_configured": true, 00:17:42.635 "data_offset": 0, 00:17:42.635 "data_size": 65536 00:17:42.635 }, 00:17:42.635 { 00:17:42.635 "name": "BaseBdev3", 00:17:42.635 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:42.635 "is_configured": true, 00:17:42.635 "data_offset": 0, 00:17:42.635 "data_size": 65536 00:17:42.635 }, 00:17:42.635 { 00:17:42.635 "name": "BaseBdev4", 00:17:42.635 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:42.635 "is_configured": true, 00:17:42.635 "data_offset": 0, 00:17:42.635 "data_size": 65536 00:17:42.635 } 00:17:42.635 ] 00:17:42.635 }' 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.635 12:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.010 [2024-11-06 12:48:32.250129] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:44.010 [2024-11-06 12:48:32.250229] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:44.010 [2024-11-06 
12:48:32.250346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.010 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.010 "name": "raid_bdev1", 00:17:44.010 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:44.010 "strip_size_kb": 64, 00:17:44.010 "state": "online", 00:17:44.010 "raid_level": "raid5f", 00:17:44.010 "superblock": false, 00:17:44.010 "num_base_bdevs": 4, 00:17:44.010 "num_base_bdevs_discovered": 4, 00:17:44.010 "num_base_bdevs_operational": 4, 00:17:44.010 "base_bdevs_list": [ 00:17:44.010 { 00:17:44.010 "name": "spare", 00:17:44.010 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 
00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev2", 00:17:44.011 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev3", 00:17:44.011 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev4", 00:17:44.011 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 } 00:17:44.011 ] 00:17:44.011 }' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.011 "name": "raid_bdev1", 00:17:44.011 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:44.011 "strip_size_kb": 64, 00:17:44.011 "state": "online", 00:17:44.011 "raid_level": "raid5f", 00:17:44.011 "superblock": false, 00:17:44.011 "num_base_bdevs": 4, 00:17:44.011 "num_base_bdevs_discovered": 4, 00:17:44.011 "num_base_bdevs_operational": 4, 00:17:44.011 "base_bdevs_list": [ 00:17:44.011 { 00:17:44.011 "name": "spare", 00:17:44.011 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev2", 00:17:44.011 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev3", 00:17:44.011 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev4", 00:17:44.011 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 } 00:17:44.011 ] 00:17:44.011 }' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.011 "name": 
"raid_bdev1", 00:17:44.011 "uuid": "019b701e-3e2c-483d-ad55-e72a64c56883", 00:17:44.011 "strip_size_kb": 64, 00:17:44.011 "state": "online", 00:17:44.011 "raid_level": "raid5f", 00:17:44.011 "superblock": false, 00:17:44.011 "num_base_bdevs": 4, 00:17:44.011 "num_base_bdevs_discovered": 4, 00:17:44.011 "num_base_bdevs_operational": 4, 00:17:44.011 "base_bdevs_list": [ 00:17:44.011 { 00:17:44.011 "name": "spare", 00:17:44.011 "uuid": "44eff691-238b-52c6-a46f-cc193bd94f43", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev2", 00:17:44.011 "uuid": "9b62b2fb-669a-54ba-8c88-a08e2c8e4e03", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev3", 00:17:44.011 "uuid": "3adcfc5f-8d64-518b-b736-ad7521f1f616", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 }, 00:17:44.011 { 00:17:44.011 "name": "BaseBdev4", 00:17:44.011 "uuid": "738c7dfa-ffec-5ab8-9fbd-49aeddbe1b55", 00:17:44.011 "is_configured": true, 00:17:44.011 "data_offset": 0, 00:17:44.011 "data_size": 65536 00:17:44.011 } 00:17:44.011 ] 00:17:44.011 }' 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.011 12:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 [2024-11-06 12:48:33.111116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.579 [2024-11-06 12:48:33.111172] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.579 [2024-11-06 12:48:33.111317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.579 [2024-11-06 12:48:33.111485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.579 [2024-11-06 12:48:33.111504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.579 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:45.177 /dev/nbd0 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.177 1+0 records in 00:17:45.177 1+0 records out 00:17:45.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283235 s, 14.5 MB/s 00:17:45.177 12:48:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.177 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:45.435 /dev/nbd1 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:45.435 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 
20 )) 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.436 1+0 records in 00:17:45.436 1+0 records out 00:17:45.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563515 s, 7.3 MB/s 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.436 12:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.693 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.951 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.209 12:48:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85106 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85106 ']' 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85106 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:46.209 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85106 00:17:46.209 killing process with pid 85106 00:17:46.209 Received shutdown signal, test time was about 60.000000 seconds 00:17:46.209 00:17:46.209 Latency(us) 00:17:46.210 [2024-11-06T12:48:34.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.210 [2024-11-06T12:48:34.867Z] =================================================================================================================== 00:17:46.210 [2024-11-06T12:48:34.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:46.210 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:46.210 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:46.210 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85106' 00:17:46.210 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 85106 00:17:46.210 [2024-11-06 12:48:34.848172] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.210 12:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 85106 00:17:46.775 [2024-11-06 12:48:35.303636] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:48.150 00:17:48.150 real 0m20.532s 00:17:48.150 user 0m25.540s 00:17:48.150 sys 0m2.344s 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.150 ************************************ 00:17:48.150 END TEST raid5f_rebuild_test 00:17:48.150 ************************************ 00:17:48.150 12:48:36 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:48.150 12:48:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:48.150 12:48:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:48.150 12:48:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.150 ************************************ 00:17:48.150 START TEST raid5f_rebuild_test_sb 00:17:48.150 ************************************ 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.150 12:48:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:48.150 
12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:48.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85616 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85616 00:17:48.150 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85616 ']' 00:17:48.151 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:48.151 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.151 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:48.151 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.151 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:48.151 12:48:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.151 [2024-11-06 12:48:36.610692] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:17:48.151 [2024-11-06 12:48:36.611059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85616 ] 00:17:48.151 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:48.151 Zero copy mechanism will not be used. 00:17:48.408 [2024-11-06 12:48:36.808746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.408 [2024-11-06 12:48:36.989089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.667 [2024-11-06 12:48:37.252513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.667 [2024-11-06 12:48:37.252595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.971 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:48.971 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:48.971 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.971 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:48.971 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.971 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 BaseBdev1_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.229 [2024-11-06 12:48:37.671580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.229 [2024-11-06 12:48:37.671702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.229 [2024-11-06 12:48:37.671738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:49.229 [2024-11-06 12:48:37.671757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.229 [2024-11-06 12:48:37.674671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.229 [2024-11-06 12:48:37.674722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.229 BaseBdev1 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 BaseBdev2_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 [2024-11-06 12:48:37.727096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:49.229 
[2024-11-06 12:48:37.727177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.229 [2024-11-06 12:48:37.727237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.229 [2024-11-06 12:48:37.727261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.229 [2024-11-06 12:48:37.730228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.229 [2024-11-06 12:48:37.730289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:49.229 BaseBdev2 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 BaseBdev3_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 [2024-11-06 12:48:37.792942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:49.229 [2024-11-06 12:48:37.793151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.229 [2024-11-06 12:48:37.793215] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:49.229 [2024-11-06 12:48:37.793242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.229 [2024-11-06 12:48:37.796127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.229 [2024-11-06 12:48:37.796329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:49.229 BaseBdev3 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 BaseBdev4_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.229 [2024-11-06 12:48:37.849210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:49.229 [2024-11-06 12:48:37.849295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.229 [2024-11-06 12:48:37.849326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:49.229 [2024-11-06 12:48:37.849345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:17:49.229 [2024-11-06 12:48:37.852331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.229 [2024-11-06 12:48:37.852385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:49.229 BaseBdev4 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.229 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.487 spare_malloc 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.487 spare_delay 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.487 [2024-11-06 12:48:37.916951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.487 [2024-11-06 12:48:37.917035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.487 [2024-11-06 12:48:37.917065] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:49.487 [2024-11-06 12:48:37.917083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.487 [2024-11-06 12:48:37.920130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.487 [2024-11-06 12:48:37.920326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.487 spare 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.487 [2024-11-06 12:48:37.929211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.487 [2024-11-06 12:48:37.931810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.487 [2024-11-06 12:48:37.932022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.487 [2024-11-06 12:48:37.932121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:49.487 [2024-11-06 12:48:37.932415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:49.487 [2024-11-06 12:48:37.932439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:49.487 [2024-11-06 12:48:37.932779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:49.487 [2024-11-06 12:48:37.939710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:49.487 
[2024-11-06 12:48:37.939842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:49.487 [2024-11-06 12:48:37.940273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.487 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.488 12:48:37 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.488 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.488 "name": "raid_bdev1", 00:17:49.488 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:49.488 "strip_size_kb": 64, 00:17:49.488 "state": "online", 00:17:49.488 "raid_level": "raid5f", 00:17:49.488 "superblock": true, 00:17:49.488 "num_base_bdevs": 4, 00:17:49.488 "num_base_bdevs_discovered": 4, 00:17:49.488 "num_base_bdevs_operational": 4, 00:17:49.488 "base_bdevs_list": [ 00:17:49.488 { 00:17:49.488 "name": "BaseBdev1", 00:17:49.488 "uuid": "412f7807-8903-5b37-9883-1a5857b3b658", 00:17:49.488 "is_configured": true, 00:17:49.488 "data_offset": 2048, 00:17:49.488 "data_size": 63488 00:17:49.488 }, 00:17:49.488 { 00:17:49.488 "name": "BaseBdev2", 00:17:49.488 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:49.488 "is_configured": true, 00:17:49.488 "data_offset": 2048, 00:17:49.488 "data_size": 63488 00:17:49.488 }, 00:17:49.488 { 00:17:49.488 "name": "BaseBdev3", 00:17:49.488 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:49.488 "is_configured": true, 00:17:49.488 "data_offset": 2048, 00:17:49.488 "data_size": 63488 00:17:49.488 }, 00:17:49.488 { 00:17:49.488 "name": "BaseBdev4", 00:17:49.488 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:49.488 "is_configured": true, 00:17:49.488 "data_offset": 2048, 00:17:49.488 "data_size": 63488 00:17:49.488 } 00:17:49.488 ] 00:17:49.488 }' 00:17:49.488 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.488 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.053 [2024-11-06 12:48:38.480800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:50.053 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:50.311 [2024-11-06 12:48:38.852695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:50.312 /dev/nbd0 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:17:50.312 1+0 records in 00:17:50.312 1+0 records out 00:17:50.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679242 s, 6.0 MB/s 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:50.312 12:48:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:51.244 496+0 records in 00:17:51.244 496+0 records out 00:17:51.244 97517568 bytes (98 MB, 93 MiB) copied, 0.761209 s, 128 MB/s 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.244 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:51.501 [2024-11-06 12:48:39.990052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.501 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:51.501 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:51.501 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:51.501 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.501 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.501 12:48:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.501 [2024-11-06 12:48:40.006613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.501 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.502 "name": "raid_bdev1", 00:17:51.502 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:51.502 "strip_size_kb": 64, 00:17:51.502 "state": "online", 00:17:51.502 "raid_level": "raid5f", 00:17:51.502 "superblock": true, 00:17:51.502 "num_base_bdevs": 4, 00:17:51.502 "num_base_bdevs_discovered": 3, 00:17:51.502 
"num_base_bdevs_operational": 3, 00:17:51.502 "base_bdevs_list": [ 00:17:51.502 { 00:17:51.502 "name": null, 00:17:51.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.502 "is_configured": false, 00:17:51.502 "data_offset": 0, 00:17:51.502 "data_size": 63488 00:17:51.502 }, 00:17:51.502 { 00:17:51.502 "name": "BaseBdev2", 00:17:51.502 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:51.502 "is_configured": true, 00:17:51.502 "data_offset": 2048, 00:17:51.502 "data_size": 63488 00:17:51.502 }, 00:17:51.502 { 00:17:51.502 "name": "BaseBdev3", 00:17:51.502 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:51.502 "is_configured": true, 00:17:51.502 "data_offset": 2048, 00:17:51.502 "data_size": 63488 00:17:51.502 }, 00:17:51.502 { 00:17:51.502 "name": "BaseBdev4", 00:17:51.502 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:51.502 "is_configured": true, 00:17:51.502 "data_offset": 2048, 00:17:51.502 "data_size": 63488 00:17:51.502 } 00:17:51.502 ] 00:17:51.502 }' 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.502 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.068 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:52.068 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.068 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.068 [2024-11-06 12:48:40.530782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.068 [2024-11-06 12:48:40.545425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:52.068 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.068 12:48:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:52.068 
[2024-11-06 12:48:40.554616] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.002 "name": "raid_bdev1", 00:17:53.002 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:53.002 "strip_size_kb": 64, 00:17:53.002 "state": "online", 00:17:53.002 "raid_level": "raid5f", 00:17:53.002 "superblock": true, 00:17:53.002 "num_base_bdevs": 4, 00:17:53.002 "num_base_bdevs_discovered": 4, 00:17:53.002 "num_base_bdevs_operational": 4, 00:17:53.002 "process": { 00:17:53.002 "type": "rebuild", 00:17:53.002 "target": "spare", 00:17:53.002 "progress": { 00:17:53.002 "blocks": 17280, 00:17:53.002 "percent": 9 00:17:53.002 } 00:17:53.002 }, 00:17:53.002 "base_bdevs_list": [ 00:17:53.002 { 00:17:53.002 "name": 
"spare", 00:17:53.002 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:53.002 "is_configured": true, 00:17:53.002 "data_offset": 2048, 00:17:53.002 "data_size": 63488 00:17:53.002 }, 00:17:53.002 { 00:17:53.002 "name": "BaseBdev2", 00:17:53.002 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:53.002 "is_configured": true, 00:17:53.002 "data_offset": 2048, 00:17:53.002 "data_size": 63488 00:17:53.002 }, 00:17:53.002 { 00:17:53.002 "name": "BaseBdev3", 00:17:53.002 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:53.002 "is_configured": true, 00:17:53.002 "data_offset": 2048, 00:17:53.002 "data_size": 63488 00:17:53.002 }, 00:17:53.002 { 00:17:53.002 "name": "BaseBdev4", 00:17:53.002 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:53.002 "is_configured": true, 00:17:53.002 "data_offset": 2048, 00:17:53.002 "data_size": 63488 00:17:53.002 } 00:17:53.002 ] 00:17:53.002 }' 00:17:53.002 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.260 [2024-11-06 12:48:41.725566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.260 [2024-11-06 12:48:41.769782] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.260 [2024-11-06 
12:48:41.769875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.260 [2024-11-06 12:48:41.769902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.260 [2024-11-06 12:48:41.769917] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.260 "name": "raid_bdev1", 00:17:53.260 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:53.260 "strip_size_kb": 64, 00:17:53.260 "state": "online", 00:17:53.260 "raid_level": "raid5f", 00:17:53.260 "superblock": true, 00:17:53.260 "num_base_bdevs": 4, 00:17:53.260 "num_base_bdevs_discovered": 3, 00:17:53.260 "num_base_bdevs_operational": 3, 00:17:53.260 "base_bdevs_list": [ 00:17:53.260 { 00:17:53.260 "name": null, 00:17:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.260 "is_configured": false, 00:17:53.260 "data_offset": 0, 00:17:53.260 "data_size": 63488 00:17:53.260 }, 00:17:53.260 { 00:17:53.260 "name": "BaseBdev2", 00:17:53.260 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:53.260 "is_configured": true, 00:17:53.260 "data_offset": 2048, 00:17:53.260 "data_size": 63488 00:17:53.260 }, 00:17:53.260 { 00:17:53.260 "name": "BaseBdev3", 00:17:53.260 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:53.260 "is_configured": true, 00:17:53.260 "data_offset": 2048, 00:17:53.260 "data_size": 63488 00:17:53.260 }, 00:17:53.260 { 00:17:53.260 "name": "BaseBdev4", 00:17:53.260 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:53.260 "is_configured": true, 00:17:53.260 "data_offset": 2048, 00:17:53.260 "data_size": 63488 00:17:53.260 } 00:17:53.260 ] 00:17:53.260 }' 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.260 12:48:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.827 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.827 "name": "raid_bdev1", 00:17:53.827 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:53.827 "strip_size_kb": 64, 00:17:53.827 "state": "online", 00:17:53.827 "raid_level": "raid5f", 00:17:53.827 "superblock": true, 00:17:53.827 "num_base_bdevs": 4, 00:17:53.827 "num_base_bdevs_discovered": 3, 00:17:53.827 "num_base_bdevs_operational": 3, 00:17:53.827 "base_bdevs_list": [ 00:17:53.828 { 00:17:53.828 "name": null, 00:17:53.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.828 "is_configured": false, 00:17:53.828 "data_offset": 0, 00:17:53.828 "data_size": 63488 00:17:53.828 }, 00:17:53.828 { 00:17:53.828 "name": "BaseBdev2", 00:17:53.828 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:53.828 "is_configured": true, 00:17:53.828 "data_offset": 2048, 00:17:53.828 "data_size": 63488 00:17:53.828 }, 00:17:53.828 { 00:17:53.828 "name": "BaseBdev3", 00:17:53.828 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:53.828 "is_configured": true, 
00:17:53.828 "data_offset": 2048, 00:17:53.828 "data_size": 63488 00:17:53.828 }, 00:17:53.828 { 00:17:53.828 "name": "BaseBdev4", 00:17:53.828 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:53.828 "is_configured": true, 00:17:53.828 "data_offset": 2048, 00:17:53.828 "data_size": 63488 00:17:53.828 } 00:17:53.828 ] 00:17:53.828 }' 00:17:53.828 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.828 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.828 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.086 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.086 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.086 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.086 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.086 [2024-11-06 12:48:42.510915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.086 [2024-11-06 12:48:42.525109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:54.086 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.086 12:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:54.086 [2024-11-06 12:48:42.534281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.020 12:48:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.020 "name": "raid_bdev1", 00:17:55.020 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:55.020 "strip_size_kb": 64, 00:17:55.020 "state": "online", 00:17:55.020 "raid_level": "raid5f", 00:17:55.020 "superblock": true, 00:17:55.020 "num_base_bdevs": 4, 00:17:55.020 "num_base_bdevs_discovered": 4, 00:17:55.020 "num_base_bdevs_operational": 4, 00:17:55.020 "process": { 00:17:55.020 "type": "rebuild", 00:17:55.020 "target": "spare", 00:17:55.020 "progress": { 00:17:55.020 "blocks": 17280, 00:17:55.020 "percent": 9 00:17:55.020 } 00:17:55.020 }, 00:17:55.020 "base_bdevs_list": [ 00:17:55.020 { 00:17:55.020 "name": "spare", 00:17:55.020 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:55.020 "is_configured": true, 00:17:55.020 "data_offset": 2048, 00:17:55.020 "data_size": 63488 00:17:55.020 }, 00:17:55.020 { 00:17:55.020 "name": "BaseBdev2", 00:17:55.020 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:55.020 "is_configured": true, 00:17:55.020 "data_offset": 2048, 00:17:55.020 "data_size": 63488 
00:17:55.020 }, 00:17:55.020 { 00:17:55.020 "name": "BaseBdev3", 00:17:55.020 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:55.020 "is_configured": true, 00:17:55.020 "data_offset": 2048, 00:17:55.020 "data_size": 63488 00:17:55.020 }, 00:17:55.020 { 00:17:55.020 "name": "BaseBdev4", 00:17:55.020 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:55.020 "is_configured": true, 00:17:55.020 "data_offset": 2048, 00:17:55.020 "data_size": 63488 00:17:55.020 } 00:17:55.020 ] 00:17:55.020 }' 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.020 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:55.278 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=697 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.278 12:48:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.278 "name": "raid_bdev1", 00:17:55.278 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:55.278 "strip_size_kb": 64, 00:17:55.278 "state": "online", 00:17:55.278 "raid_level": "raid5f", 00:17:55.278 "superblock": true, 00:17:55.278 "num_base_bdevs": 4, 00:17:55.278 "num_base_bdevs_discovered": 4, 00:17:55.278 "num_base_bdevs_operational": 4, 00:17:55.278 "process": { 00:17:55.278 "type": "rebuild", 00:17:55.278 "target": "spare", 00:17:55.278 "progress": { 00:17:55.278 "blocks": 21120, 00:17:55.278 "percent": 11 00:17:55.278 } 00:17:55.278 }, 00:17:55.278 "base_bdevs_list": [ 00:17:55.278 { 00:17:55.278 "name": "spare", 00:17:55.278 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:55.278 "is_configured": true, 00:17:55.278 "data_offset": 2048, 00:17:55.278 "data_size": 63488 00:17:55.278 }, 00:17:55.278 { 00:17:55.278 "name": "BaseBdev2", 00:17:55.278 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:55.278 "is_configured": true, 00:17:55.278 "data_offset": 2048, 00:17:55.278 "data_size": 63488 
00:17:55.278 }, 00:17:55.278 { 00:17:55.278 "name": "BaseBdev3", 00:17:55.278 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:55.278 "is_configured": true, 00:17:55.278 "data_offset": 2048, 00:17:55.278 "data_size": 63488 00:17:55.278 }, 00:17:55.278 { 00:17:55.278 "name": "BaseBdev4", 00:17:55.278 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:55.278 "is_configured": true, 00:17:55.278 "data_offset": 2048, 00:17:55.278 "data_size": 63488 00:17:55.278 } 00:17:55.278 ] 00:17:55.278 }' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.278 12:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.211 12:48:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.211 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.469 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.469 "name": "raid_bdev1", 00:17:56.469 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:56.469 "strip_size_kb": 64, 00:17:56.469 "state": "online", 00:17:56.469 "raid_level": "raid5f", 00:17:56.469 "superblock": true, 00:17:56.469 "num_base_bdevs": 4, 00:17:56.469 "num_base_bdevs_discovered": 4, 00:17:56.469 "num_base_bdevs_operational": 4, 00:17:56.469 "process": { 00:17:56.469 "type": "rebuild", 00:17:56.469 "target": "spare", 00:17:56.469 "progress": { 00:17:56.469 "blocks": 42240, 00:17:56.469 "percent": 22 00:17:56.469 } 00:17:56.469 }, 00:17:56.469 "base_bdevs_list": [ 00:17:56.469 { 00:17:56.469 "name": "spare", 00:17:56.469 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:56.469 "is_configured": true, 00:17:56.469 "data_offset": 2048, 00:17:56.469 "data_size": 63488 00:17:56.469 }, 00:17:56.469 { 00:17:56.469 "name": "BaseBdev2", 00:17:56.469 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:56.469 "is_configured": true, 00:17:56.469 "data_offset": 2048, 00:17:56.469 "data_size": 63488 00:17:56.469 }, 00:17:56.469 { 00:17:56.469 "name": "BaseBdev3", 00:17:56.469 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:56.469 "is_configured": true, 00:17:56.469 "data_offset": 2048, 00:17:56.469 "data_size": 63488 00:17:56.469 }, 00:17:56.469 { 00:17:56.469 "name": "BaseBdev4", 00:17:56.469 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:56.469 "is_configured": true, 00:17:56.469 "data_offset": 2048, 00:17:56.469 "data_size": 63488 00:17:56.469 } 00:17:56.469 ] 00:17:56.469 }' 00:17:56.469 12:48:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.469 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.469 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.469 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.469 12:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.403 12:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.403 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.403 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.403 "name": "raid_bdev1", 00:17:57.403 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:57.403 
"strip_size_kb": 64, 00:17:57.403 "state": "online", 00:17:57.403 "raid_level": "raid5f", 00:17:57.403 "superblock": true, 00:17:57.403 "num_base_bdevs": 4, 00:17:57.403 "num_base_bdevs_discovered": 4, 00:17:57.403 "num_base_bdevs_operational": 4, 00:17:57.403 "process": { 00:17:57.403 "type": "rebuild", 00:17:57.403 "target": "spare", 00:17:57.403 "progress": { 00:17:57.403 "blocks": 65280, 00:17:57.403 "percent": 34 00:17:57.403 } 00:17:57.403 }, 00:17:57.403 "base_bdevs_list": [ 00:17:57.403 { 00:17:57.403 "name": "spare", 00:17:57.403 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 2048, 00:17:57.403 "data_size": 63488 00:17:57.403 }, 00:17:57.403 { 00:17:57.403 "name": "BaseBdev2", 00:17:57.403 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 2048, 00:17:57.403 "data_size": 63488 00:17:57.403 }, 00:17:57.403 { 00:17:57.403 "name": "BaseBdev3", 00:17:57.403 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 2048, 00:17:57.403 "data_size": 63488 00:17:57.403 }, 00:17:57.403 { 00:17:57.403 "name": "BaseBdev4", 00:17:57.403 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 2048, 00:17:57.403 "data_size": 63488 00:17:57.403 } 00:17:57.403 ] 00:17:57.403 }' 00:17:57.403 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.662 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.662 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.662 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.662 12:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:58.600 
12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.600 "name": "raid_bdev1", 00:17:58.600 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:58.600 "strip_size_kb": 64, 00:17:58.600 "state": "online", 00:17:58.600 "raid_level": "raid5f", 00:17:58.600 "superblock": true, 00:17:58.600 "num_base_bdevs": 4, 00:17:58.600 "num_base_bdevs_discovered": 4, 00:17:58.600 "num_base_bdevs_operational": 4, 00:17:58.600 "process": { 00:17:58.600 "type": "rebuild", 00:17:58.600 "target": "spare", 00:17:58.600 "progress": { 00:17:58.600 "blocks": 86400, 00:17:58.600 "percent": 45 00:17:58.600 } 00:17:58.600 }, 00:17:58.600 "base_bdevs_list": [ 00:17:58.600 { 00:17:58.600 "name": "spare", 00:17:58.600 "uuid": 
"4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:58.600 "is_configured": true, 00:17:58.600 "data_offset": 2048, 00:17:58.600 "data_size": 63488 00:17:58.600 }, 00:17:58.600 { 00:17:58.600 "name": "BaseBdev2", 00:17:58.600 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:58.600 "is_configured": true, 00:17:58.600 "data_offset": 2048, 00:17:58.600 "data_size": 63488 00:17:58.600 }, 00:17:58.600 { 00:17:58.600 "name": "BaseBdev3", 00:17:58.600 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:58.600 "is_configured": true, 00:17:58.600 "data_offset": 2048, 00:17:58.600 "data_size": 63488 00:17:58.600 }, 00:17:58.600 { 00:17:58.600 "name": "BaseBdev4", 00:17:58.600 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:58.600 "is_configured": true, 00:17:58.600 "data_offset": 2048, 00:17:58.600 "data_size": 63488 00:17:58.600 } 00:17:58.600 ] 00:17:58.600 }' 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.600 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.859 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.859 12:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.793 "name": "raid_bdev1", 00:17:59.793 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:17:59.793 "strip_size_kb": 64, 00:17:59.793 "state": "online", 00:17:59.793 "raid_level": "raid5f", 00:17:59.793 "superblock": true, 00:17:59.793 "num_base_bdevs": 4, 00:17:59.793 "num_base_bdevs_discovered": 4, 00:17:59.793 "num_base_bdevs_operational": 4, 00:17:59.793 "process": { 00:17:59.793 "type": "rebuild", 00:17:59.793 "target": "spare", 00:17:59.793 "progress": { 00:17:59.793 "blocks": 109440, 00:17:59.793 "percent": 57 00:17:59.793 } 00:17:59.793 }, 00:17:59.793 "base_bdevs_list": [ 00:17:59.793 { 00:17:59.793 "name": "spare", 00:17:59.793 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:17:59.793 "is_configured": true, 00:17:59.793 "data_offset": 2048, 00:17:59.793 "data_size": 63488 00:17:59.793 }, 00:17:59.793 { 00:17:59.793 "name": "BaseBdev2", 00:17:59.793 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:17:59.793 "is_configured": true, 00:17:59.793 "data_offset": 2048, 00:17:59.793 "data_size": 63488 00:17:59.793 }, 00:17:59.793 { 00:17:59.793 "name": "BaseBdev3", 00:17:59.793 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:17:59.793 "is_configured": true, 00:17:59.793 
"data_offset": 2048, 00:17:59.793 "data_size": 63488 00:17:59.793 }, 00:17:59.793 { 00:17:59.793 "name": "BaseBdev4", 00:17:59.793 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:17:59.793 "is_configured": true, 00:17:59.793 "data_offset": 2048, 00:17:59.793 "data_size": 63488 00:17:59.793 } 00:17:59.793 ] 00:17:59.793 }' 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.793 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.051 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.051 12:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.985 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.985 "name": "raid_bdev1", 00:18:00.985 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:00.985 "strip_size_kb": 64, 00:18:00.985 "state": "online", 00:18:00.985 "raid_level": "raid5f", 00:18:00.985 "superblock": true, 00:18:00.985 "num_base_bdevs": 4, 00:18:00.985 "num_base_bdevs_discovered": 4, 00:18:00.985 "num_base_bdevs_operational": 4, 00:18:00.986 "process": { 00:18:00.986 "type": "rebuild", 00:18:00.986 "target": "spare", 00:18:00.986 "progress": { 00:18:00.986 "blocks": 130560, 00:18:00.986 "percent": 68 00:18:00.986 } 00:18:00.986 }, 00:18:00.986 "base_bdevs_list": [ 00:18:00.986 { 00:18:00.986 "name": "spare", 00:18:00.986 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:00.986 "is_configured": true, 00:18:00.986 "data_offset": 2048, 00:18:00.986 "data_size": 63488 00:18:00.986 }, 00:18:00.986 { 00:18:00.986 "name": "BaseBdev2", 00:18:00.986 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:00.986 "is_configured": true, 00:18:00.986 "data_offset": 2048, 00:18:00.986 "data_size": 63488 00:18:00.986 }, 00:18:00.986 { 00:18:00.986 "name": "BaseBdev3", 00:18:00.986 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:00.986 "is_configured": true, 00:18:00.986 "data_offset": 2048, 00:18:00.986 "data_size": 63488 00:18:00.986 }, 00:18:00.986 { 00:18:00.986 "name": "BaseBdev4", 00:18:00.986 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:00.986 "is_configured": true, 00:18:00.986 "data_offset": 2048, 00:18:00.986 "data_size": 63488 00:18:00.986 } 00:18:00.986 ] 00:18:00.986 }' 00:18:00.986 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.986 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:18:00.986 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.986 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.986 12:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.378 "name": "raid_bdev1", 00:18:02.378 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:02.378 "strip_size_kb": 64, 00:18:02.378 "state": "online", 00:18:02.378 "raid_level": "raid5f", 00:18:02.378 "superblock": true, 00:18:02.378 "num_base_bdevs": 4, 00:18:02.378 "num_base_bdevs_discovered": 4, 
00:18:02.378 "num_base_bdevs_operational": 4, 00:18:02.378 "process": { 00:18:02.378 "type": "rebuild", 00:18:02.378 "target": "spare", 00:18:02.378 "progress": { 00:18:02.378 "blocks": 153600, 00:18:02.378 "percent": 80 00:18:02.378 } 00:18:02.378 }, 00:18:02.378 "base_bdevs_list": [ 00:18:02.378 { 00:18:02.378 "name": "spare", 00:18:02.378 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:02.378 "is_configured": true, 00:18:02.378 "data_offset": 2048, 00:18:02.378 "data_size": 63488 00:18:02.378 }, 00:18:02.378 { 00:18:02.378 "name": "BaseBdev2", 00:18:02.378 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:02.378 "is_configured": true, 00:18:02.378 "data_offset": 2048, 00:18:02.378 "data_size": 63488 00:18:02.378 }, 00:18:02.378 { 00:18:02.378 "name": "BaseBdev3", 00:18:02.378 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:02.378 "is_configured": true, 00:18:02.378 "data_offset": 2048, 00:18:02.378 "data_size": 63488 00:18:02.378 }, 00:18:02.378 { 00:18:02.378 "name": "BaseBdev4", 00:18:02.378 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:02.378 "is_configured": true, 00:18:02.378 "data_offset": 2048, 00:18:02.378 "data_size": 63488 00:18:02.378 } 00:18:02.378 ] 00:18:02.378 }' 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.378 12:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.313 "name": "raid_bdev1", 00:18:03.313 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:03.313 "strip_size_kb": 64, 00:18:03.313 "state": "online", 00:18:03.313 "raid_level": "raid5f", 00:18:03.313 "superblock": true, 00:18:03.313 "num_base_bdevs": 4, 00:18:03.313 "num_base_bdevs_discovered": 4, 00:18:03.313 "num_base_bdevs_operational": 4, 00:18:03.313 "process": { 00:18:03.313 "type": "rebuild", 00:18:03.313 "target": "spare", 00:18:03.313 "progress": { 00:18:03.313 "blocks": 174720, 00:18:03.313 "percent": 91 00:18:03.313 } 00:18:03.313 }, 00:18:03.313 "base_bdevs_list": [ 00:18:03.313 { 00:18:03.313 "name": "spare", 00:18:03.313 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:03.313 "is_configured": true, 00:18:03.313 "data_offset": 2048, 00:18:03.313 "data_size": 63488 00:18:03.313 }, 00:18:03.313 { 00:18:03.313 "name": "BaseBdev2", 
00:18:03.313 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:03.313 "is_configured": true, 00:18:03.313 "data_offset": 2048, 00:18:03.313 "data_size": 63488 00:18:03.313 }, 00:18:03.313 { 00:18:03.313 "name": "BaseBdev3", 00:18:03.313 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:03.313 "is_configured": true, 00:18:03.313 "data_offset": 2048, 00:18:03.313 "data_size": 63488 00:18:03.313 }, 00:18:03.313 { 00:18:03.313 "name": "BaseBdev4", 00:18:03.313 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:03.313 "is_configured": true, 00:18:03.313 "data_offset": 2048, 00:18:03.313 "data_size": 63488 00:18:03.313 } 00:18:03.313 ] 00:18:03.313 }' 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.313 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.570 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.570 12:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.139 [2024-11-06 12:48:52.649670] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:04.139 [2024-11-06 12:48:52.649840] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:04.139 [2024-11-06 12:48:52.650173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.403 12:48:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.403 12:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.403 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.403 "name": "raid_bdev1", 00:18:04.403 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:04.403 "strip_size_kb": 64, 00:18:04.403 "state": "online", 00:18:04.403 "raid_level": "raid5f", 00:18:04.403 "superblock": true, 00:18:04.403 "num_base_bdevs": 4, 00:18:04.403 "num_base_bdevs_discovered": 4, 00:18:04.403 "num_base_bdevs_operational": 4, 00:18:04.403 "base_bdevs_list": [ 00:18:04.403 { 00:18:04.403 "name": "spare", 00:18:04.403 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:04.403 "is_configured": true, 00:18:04.403 "data_offset": 2048, 00:18:04.403 "data_size": 63488 00:18:04.403 }, 00:18:04.403 { 00:18:04.403 "name": "BaseBdev2", 00:18:04.403 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:04.403 "is_configured": true, 00:18:04.403 "data_offset": 2048, 00:18:04.403 "data_size": 63488 00:18:04.403 }, 00:18:04.403 { 00:18:04.403 "name": "BaseBdev3", 00:18:04.403 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:04.403 "is_configured": true, 00:18:04.403 "data_offset": 2048, 00:18:04.403 
"data_size": 63488 00:18:04.403 }, 00:18:04.403 { 00:18:04.403 "name": "BaseBdev4", 00:18:04.403 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:04.403 "is_configured": true, 00:18:04.403 "data_offset": 2048, 00:18:04.403 "data_size": 63488 00:18:04.403 } 00:18:04.403 ] 00:18:04.403 }' 00:18:04.403 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.662 12:48:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.662 "name": "raid_bdev1", 00:18:04.662 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:04.662 "strip_size_kb": 64, 00:18:04.662 "state": "online", 00:18:04.662 "raid_level": "raid5f", 00:18:04.662 "superblock": true, 00:18:04.662 "num_base_bdevs": 4, 00:18:04.662 "num_base_bdevs_discovered": 4, 00:18:04.662 "num_base_bdevs_operational": 4, 00:18:04.662 "base_bdevs_list": [ 00:18:04.662 { 00:18:04.662 "name": "spare", 00:18:04.662 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:04.662 "is_configured": true, 00:18:04.662 "data_offset": 2048, 00:18:04.662 "data_size": 63488 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "name": "BaseBdev2", 00:18:04.662 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:04.662 "is_configured": true, 00:18:04.662 "data_offset": 2048, 00:18:04.662 "data_size": 63488 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "name": "BaseBdev3", 00:18:04.662 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:04.662 "is_configured": true, 00:18:04.662 "data_offset": 2048, 00:18:04.662 "data_size": 63488 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "name": "BaseBdev4", 00:18:04.662 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:04.662 "is_configured": true, 00:18:04.662 "data_offset": 2048, 00:18:04.662 "data_size": 63488 00:18:04.662 } 00:18:04.662 ] 00:18:04.662 }' 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.662 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.922 "name": "raid_bdev1", 00:18:04.922 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:04.922 "strip_size_kb": 64, 00:18:04.922 "state": "online", 00:18:04.922 "raid_level": "raid5f", 00:18:04.922 "superblock": true, 00:18:04.922 "num_base_bdevs": 4, 00:18:04.922 "num_base_bdevs_discovered": 4, 00:18:04.922 
"num_base_bdevs_operational": 4, 00:18:04.922 "base_bdevs_list": [ 00:18:04.922 { 00:18:04.922 "name": "spare", 00:18:04.922 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:04.922 "is_configured": true, 00:18:04.922 "data_offset": 2048, 00:18:04.922 "data_size": 63488 00:18:04.922 }, 00:18:04.922 { 00:18:04.922 "name": "BaseBdev2", 00:18:04.922 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:04.922 "is_configured": true, 00:18:04.922 "data_offset": 2048, 00:18:04.922 "data_size": 63488 00:18:04.922 }, 00:18:04.922 { 00:18:04.922 "name": "BaseBdev3", 00:18:04.922 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:04.922 "is_configured": true, 00:18:04.922 "data_offset": 2048, 00:18:04.922 "data_size": 63488 00:18:04.922 }, 00:18:04.922 { 00:18:04.922 "name": "BaseBdev4", 00:18:04.922 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:04.922 "is_configured": true, 00:18:04.922 "data_offset": 2048, 00:18:04.922 "data_size": 63488 00:18:04.922 } 00:18:04.922 ] 00:18:04.922 }' 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.922 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.619 [2024-11-06 12:48:53.844607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.619 [2024-11-06 12:48:53.844650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.619 [2024-11-06 12:48:53.844809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.619 [2024-11-06 12:48:53.844973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:05.619 [2024-11-06 12:48:53.845014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:05.619 12:48:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.619 12:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:05.619 /dev/nbd0 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.619 1+0 records in 00:18:05.619 1+0 records out 00:18:05.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273056 s, 15.0 MB/s 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # size=4096 00:18:05.619 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.882 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:05.882 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:05.882 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.882 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.882 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:06.140 /dev/nbd1 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.140 1+0 records in 00:18:06.140 1+0 records out 00:18:06.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465676 s, 8.8 MB/s 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.140 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.141 12:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.707 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.966 [2024-11-06 12:48:55.477762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.966 [2024-11-06 12:48:55.477837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.966 [2024-11-06 12:48:55.477873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:06.966 [2024-11-06 12:48:55.477889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.966 [2024-11-06 12:48:55.481206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.966 [2024-11-06 12:48:55.481259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.966 [2024-11-06 12:48:55.481413] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.966 [2024-11-06 12:48:55.481502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.966 [2024-11-06 12:48:55.481736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.966 [2024-11-06 12:48:55.481946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:06.966 [2024-11-06 12:48:55.482073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:06.966 spare 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.966 [2024-11-06 12:48:55.582229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:06.966 [2024-11-06 12:48:55.582322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:06.966 [2024-11-06 12:48:55.582837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:06.966 [2024-11-06 12:48:55.589406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:06.966 [2024-11-06 12:48:55.589454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:06.966 [2024-11-06 12:48:55.589750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.966 12:48:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.966 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.225 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.225 "name": "raid_bdev1", 00:18:07.225 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:07.225 "strip_size_kb": 64, 00:18:07.225 "state": "online", 00:18:07.225 "raid_level": "raid5f", 00:18:07.225 "superblock": true, 00:18:07.225 "num_base_bdevs": 4, 00:18:07.225 "num_base_bdevs_discovered": 4, 00:18:07.225 "num_base_bdevs_operational": 4, 00:18:07.225 "base_bdevs_list": [ 00:18:07.225 { 00:18:07.225 "name": "spare", 00:18:07.225 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:07.225 "is_configured": true, 00:18:07.225 "data_offset": 2048, 00:18:07.225 "data_size": 63488 00:18:07.225 }, 00:18:07.225 { 00:18:07.225 "name": "BaseBdev2", 00:18:07.225 "uuid": 
"4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:07.225 "is_configured": true, 00:18:07.225 "data_offset": 2048, 00:18:07.225 "data_size": 63488 00:18:07.225 }, 00:18:07.225 { 00:18:07.225 "name": "BaseBdev3", 00:18:07.225 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:07.225 "is_configured": true, 00:18:07.225 "data_offset": 2048, 00:18:07.225 "data_size": 63488 00:18:07.225 }, 00:18:07.225 { 00:18:07.226 "name": "BaseBdev4", 00:18:07.226 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:07.226 "is_configured": true, 00:18:07.226 "data_offset": 2048, 00:18:07.226 "data_size": 63488 00:18:07.226 } 00:18:07.226 ] 00:18:07.226 }' 00:18:07.226 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.226 12:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.484 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.783 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.783 12:48:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.783 "name": "raid_bdev1", 00:18:07.783 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:07.783 "strip_size_kb": 64, 00:18:07.783 "state": "online", 00:18:07.783 "raid_level": "raid5f", 00:18:07.783 "superblock": true, 00:18:07.783 "num_base_bdevs": 4, 00:18:07.783 "num_base_bdevs_discovered": 4, 00:18:07.783 "num_base_bdevs_operational": 4, 00:18:07.783 "base_bdevs_list": [ 00:18:07.784 { 00:18:07.784 "name": "spare", 00:18:07.784 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 }, 00:18:07.784 { 00:18:07.784 "name": "BaseBdev2", 00:18:07.784 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 }, 00:18:07.784 { 00:18:07.784 "name": "BaseBdev3", 00:18:07.784 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 }, 00:18:07.784 { 00:18:07.784 "name": "BaseBdev4", 00:18:07.784 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 } 00:18:07.784 ] 00:18:07.784 }' 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.784 
12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.784 [2024-11-06 12:48:56.341941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.784 "name": "raid_bdev1", 00:18:07.784 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:07.784 "strip_size_kb": 64, 00:18:07.784 "state": "online", 00:18:07.784 "raid_level": "raid5f", 00:18:07.784 "superblock": true, 00:18:07.784 "num_base_bdevs": 4, 00:18:07.784 "num_base_bdevs_discovered": 3, 00:18:07.784 "num_base_bdevs_operational": 3, 00:18:07.784 "base_bdevs_list": [ 00:18:07.784 { 00:18:07.784 "name": null, 00:18:07.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.784 "is_configured": false, 00:18:07.784 "data_offset": 0, 00:18:07.784 "data_size": 63488 00:18:07.784 }, 00:18:07.784 { 00:18:07.784 "name": "BaseBdev2", 00:18:07.784 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 }, 00:18:07.784 { 00:18:07.784 "name": "BaseBdev3", 00:18:07.784 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 }, 00:18:07.784 { 00:18:07.784 "name": "BaseBdev4", 
00:18:07.784 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:07.784 "is_configured": true, 00:18:07.784 "data_offset": 2048, 00:18:07.784 "data_size": 63488 00:18:07.784 } 00:18:07.784 ] 00:18:07.784 }' 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.784 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.351 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.351 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.351 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.351 [2024-11-06 12:48:56.850102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.351 [2024-11-06 12:48:56.850392] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.351 [2024-11-06 12:48:56.850426] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:08.351 [2024-11-06 12:48:56.850476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.351 [2024-11-06 12:48:56.864128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:08.351 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.351 12:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:08.351 [2024-11-06 12:48:56.873129] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.286 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.287 "name": "raid_bdev1", 00:18:09.287 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:09.287 "strip_size_kb": 64, 00:18:09.287 "state": "online", 00:18:09.287 
"raid_level": "raid5f", 00:18:09.287 "superblock": true, 00:18:09.287 "num_base_bdevs": 4, 00:18:09.287 "num_base_bdevs_discovered": 4, 00:18:09.287 "num_base_bdevs_operational": 4, 00:18:09.287 "process": { 00:18:09.287 "type": "rebuild", 00:18:09.287 "target": "spare", 00:18:09.287 "progress": { 00:18:09.287 "blocks": 17280, 00:18:09.287 "percent": 9 00:18:09.287 } 00:18:09.287 }, 00:18:09.287 "base_bdevs_list": [ 00:18:09.287 { 00:18:09.287 "name": "spare", 00:18:09.287 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 2048, 00:18:09.287 "data_size": 63488 00:18:09.287 }, 00:18:09.287 { 00:18:09.287 "name": "BaseBdev2", 00:18:09.287 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 2048, 00:18:09.287 "data_size": 63488 00:18:09.287 }, 00:18:09.287 { 00:18:09.287 "name": "BaseBdev3", 00:18:09.287 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 2048, 00:18:09.287 "data_size": 63488 00:18:09.287 }, 00:18:09.287 { 00:18:09.287 "name": "BaseBdev4", 00:18:09.287 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 2048, 00:18:09.287 "data_size": 63488 00:18:09.287 } 00:18:09.287 ] 00:18:09.287 }' 00:18:09.287 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.546 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.546 12:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.546 [2024-11-06 12:48:58.027705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.546 [2024-11-06 12:48:58.087894] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.546 [2024-11-06 12:48:58.088018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.546 [2024-11-06 12:48:58.088046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.546 [2024-11-06 12:48:58.088066] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.546 "name": "raid_bdev1", 00:18:09.546 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:09.546 "strip_size_kb": 64, 00:18:09.546 "state": "online", 00:18:09.546 "raid_level": "raid5f", 00:18:09.546 "superblock": true, 00:18:09.546 "num_base_bdevs": 4, 00:18:09.546 "num_base_bdevs_discovered": 3, 00:18:09.546 "num_base_bdevs_operational": 3, 00:18:09.546 "base_bdevs_list": [ 00:18:09.546 { 00:18:09.546 "name": null, 00:18:09.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.546 "is_configured": false, 00:18:09.546 "data_offset": 0, 00:18:09.546 "data_size": 63488 00:18:09.546 }, 00:18:09.546 { 00:18:09.546 "name": "BaseBdev2", 00:18:09.546 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:09.546 "is_configured": true, 00:18:09.546 "data_offset": 2048, 00:18:09.546 "data_size": 63488 00:18:09.546 }, 00:18:09.546 { 00:18:09.546 "name": "BaseBdev3", 00:18:09.546 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:09.546 "is_configured": true, 00:18:09.546 "data_offset": 2048, 00:18:09.546 "data_size": 63488 00:18:09.546 }, 00:18:09.546 { 00:18:09.546 "name": "BaseBdev4", 00:18:09.546 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:09.546 "is_configured": true, 00:18:09.546 "data_offset": 2048, 00:18:09.546 "data_size": 63488 00:18:09.546 } 00:18:09.546 ] 00:18:09.546 }' 
00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.546 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.114 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.114 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.114 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.114 [2024-11-06 12:48:58.649043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.114 [2024-11-06 12:48:58.649148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.114 [2024-11-06 12:48:58.649206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:10.114 [2024-11-06 12:48:58.649229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.114 [2024-11-06 12:48:58.649904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.114 [2024-11-06 12:48:58.649937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.114 [2024-11-06 12:48:58.650073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:10.114 [2024-11-06 12:48:58.650100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.114 [2024-11-06 12:48:58.650116] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:10.114 [2024-11-06 12:48:58.650156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.114 [2024-11-06 12:48:58.663817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:10.114 spare 00:18:10.114 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.114 12:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:10.114 [2024-11-06 12:48:58.672723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.050 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.308 "name": "raid_bdev1", 00:18:11.308 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:11.308 "strip_size_kb": 64, 00:18:11.308 "state": 
"online", 00:18:11.308 "raid_level": "raid5f", 00:18:11.308 "superblock": true, 00:18:11.308 "num_base_bdevs": 4, 00:18:11.308 "num_base_bdevs_discovered": 4, 00:18:11.308 "num_base_bdevs_operational": 4, 00:18:11.308 "process": { 00:18:11.308 "type": "rebuild", 00:18:11.308 "target": "spare", 00:18:11.308 "progress": { 00:18:11.308 "blocks": 17280, 00:18:11.308 "percent": 9 00:18:11.308 } 00:18:11.308 }, 00:18:11.308 "base_bdevs_list": [ 00:18:11.308 { 00:18:11.308 "name": "spare", 00:18:11.308 "uuid": "4e1805a2-ef1e-5270-aa95-60ccac762ddf", 00:18:11.308 "is_configured": true, 00:18:11.308 "data_offset": 2048, 00:18:11.308 "data_size": 63488 00:18:11.308 }, 00:18:11.308 { 00:18:11.308 "name": "BaseBdev2", 00:18:11.308 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:11.308 "is_configured": true, 00:18:11.308 "data_offset": 2048, 00:18:11.308 "data_size": 63488 00:18:11.308 }, 00:18:11.308 { 00:18:11.308 "name": "BaseBdev3", 00:18:11.308 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:11.308 "is_configured": true, 00:18:11.308 "data_offset": 2048, 00:18:11.308 "data_size": 63488 00:18:11.308 }, 00:18:11.308 { 00:18:11.308 "name": "BaseBdev4", 00:18:11.308 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:11.308 "is_configured": true, 00:18:11.308 "data_offset": 2048, 00:18:11.308 "data_size": 63488 00:18:11.308 } 00:18:11.308 ] 00:18:11.308 }' 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:11.308 12:48:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.308 [2024-11-06 12:48:59.835667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.308 [2024-11-06 12:48:59.887788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.308 [2024-11-06 12:48:59.888072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.308 [2024-11-06 12:48:59.888261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.308 [2024-11-06 12:48:59.888375] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.308 12:48:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.308 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.567 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.567 "name": "raid_bdev1", 00:18:11.567 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:11.567 "strip_size_kb": 64, 00:18:11.567 "state": "online", 00:18:11.567 "raid_level": "raid5f", 00:18:11.567 "superblock": true, 00:18:11.567 "num_base_bdevs": 4, 00:18:11.567 "num_base_bdevs_discovered": 3, 00:18:11.567 "num_base_bdevs_operational": 3, 00:18:11.567 "base_bdevs_list": [ 00:18:11.567 { 00:18:11.567 "name": null, 00:18:11.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.567 "is_configured": false, 00:18:11.567 "data_offset": 0, 00:18:11.567 "data_size": 63488 00:18:11.567 }, 00:18:11.567 { 00:18:11.567 "name": "BaseBdev2", 00:18:11.567 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:11.567 "is_configured": true, 00:18:11.567 "data_offset": 2048, 00:18:11.567 "data_size": 63488 00:18:11.567 }, 00:18:11.567 { 00:18:11.567 "name": "BaseBdev3", 00:18:11.567 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:11.567 "is_configured": true, 00:18:11.567 "data_offset": 2048, 00:18:11.567 "data_size": 63488 00:18:11.567 }, 00:18:11.567 { 00:18:11.567 "name": "BaseBdev4", 00:18:11.567 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:11.567 "is_configured": true, 00:18:11.567 "data_offset": 2048, 00:18:11.567 
"data_size": 63488 00:18:11.567 } 00:18:11.567 ] 00:18:11.567 }' 00:18:11.567 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.567 12:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.825 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.084 "name": "raid_bdev1", 00:18:12.084 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:12.084 "strip_size_kb": 64, 00:18:12.084 "state": "online", 00:18:12.084 "raid_level": "raid5f", 00:18:12.084 "superblock": true, 00:18:12.084 "num_base_bdevs": 4, 00:18:12.084 "num_base_bdevs_discovered": 3, 00:18:12.084 "num_base_bdevs_operational": 3, 00:18:12.084 "base_bdevs_list": [ 00:18:12.084 { 00:18:12.084 "name": null, 00:18:12.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.084 
"is_configured": false, 00:18:12.084 "data_offset": 0, 00:18:12.084 "data_size": 63488 00:18:12.084 }, 00:18:12.084 { 00:18:12.084 "name": "BaseBdev2", 00:18:12.084 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:12.084 "is_configured": true, 00:18:12.084 "data_offset": 2048, 00:18:12.084 "data_size": 63488 00:18:12.084 }, 00:18:12.084 { 00:18:12.084 "name": "BaseBdev3", 00:18:12.084 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:12.084 "is_configured": true, 00:18:12.084 "data_offset": 2048, 00:18:12.084 "data_size": 63488 00:18:12.084 }, 00:18:12.084 { 00:18:12.084 "name": "BaseBdev4", 00:18:12.084 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:12.084 "is_configured": true, 00:18:12.084 "data_offset": 2048, 00:18:12.084 "data_size": 63488 00:18:12.084 } 00:18:12.084 ] 00:18:12.084 }' 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 12:49:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 [2024-11-06 12:49:00.633496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.084 [2024-11-06 12:49:00.633750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.084 [2024-11-06 12:49:00.633796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:12.084 [2024-11-06 12:49:00.633813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.084 [2024-11-06 12:49:00.634475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.084 [2024-11-06 12:49:00.634512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.084 [2024-11-06 12:49:00.634631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:12.084 [2024-11-06 12:49:00.634661] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.084 [2024-11-06 12:49:00.634677] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:12.084 [2024-11-06 12:49:00.634691] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:12.084 BaseBdev1 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 12:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.019 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.020 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.278 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.278 "name": "raid_bdev1", 00:18:13.278 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:13.278 "strip_size_kb": 64, 00:18:13.278 "state": "online", 00:18:13.278 "raid_level": "raid5f", 00:18:13.278 "superblock": true, 00:18:13.278 "num_base_bdevs": 4, 00:18:13.278 "num_base_bdevs_discovered": 3, 00:18:13.278 "num_base_bdevs_operational": 3, 00:18:13.278 "base_bdevs_list": [ 00:18:13.278 { 00:18:13.278 "name": null, 00:18:13.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.278 "is_configured": false, 00:18:13.278 
"data_offset": 0, 00:18:13.278 "data_size": 63488 00:18:13.278 }, 00:18:13.278 { 00:18:13.278 "name": "BaseBdev2", 00:18:13.278 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:13.278 "is_configured": true, 00:18:13.278 "data_offset": 2048, 00:18:13.278 "data_size": 63488 00:18:13.278 }, 00:18:13.278 { 00:18:13.278 "name": "BaseBdev3", 00:18:13.278 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:13.278 "is_configured": true, 00:18:13.278 "data_offset": 2048, 00:18:13.278 "data_size": 63488 00:18:13.278 }, 00:18:13.278 { 00:18:13.278 "name": "BaseBdev4", 00:18:13.278 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:13.278 "is_configured": true, 00:18:13.278 "data_offset": 2048, 00:18:13.278 "data_size": 63488 00:18:13.278 } 00:18:13.278 ] 00:18:13.278 }' 00:18:13.278 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.278 12:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.536 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.795 "name": "raid_bdev1", 00:18:13.795 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:13.795 "strip_size_kb": 64, 00:18:13.795 "state": "online", 00:18:13.795 "raid_level": "raid5f", 00:18:13.795 "superblock": true, 00:18:13.795 "num_base_bdevs": 4, 00:18:13.795 "num_base_bdevs_discovered": 3, 00:18:13.795 "num_base_bdevs_operational": 3, 00:18:13.795 "base_bdevs_list": [ 00:18:13.795 { 00:18:13.795 "name": null, 00:18:13.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.795 "is_configured": false, 00:18:13.795 "data_offset": 0, 00:18:13.795 "data_size": 63488 00:18:13.795 }, 00:18:13.795 { 00:18:13.795 "name": "BaseBdev2", 00:18:13.795 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:13.795 "is_configured": true, 00:18:13.795 "data_offset": 2048, 00:18:13.795 "data_size": 63488 00:18:13.795 }, 00:18:13.795 { 00:18:13.795 "name": "BaseBdev3", 00:18:13.795 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:13.795 "is_configured": true, 00:18:13.795 "data_offset": 2048, 00:18:13.795 "data_size": 63488 00:18:13.795 }, 00:18:13.795 { 00:18:13.795 "name": "BaseBdev4", 00:18:13.795 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:13.795 "is_configured": true, 00:18:13.795 "data_offset": 2048, 00:18:13.795 "data_size": 63488 00:18:13.795 } 00:18:13.795 ] 00:18:13.795 }' 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.795 
12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.795 [2024-11-06 12:49:02.346399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.795 [2024-11-06 12:49:02.346655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.795 [2024-11-06 12:49:02.346679] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:13.795 request: 00:18:13.795 { 00:18:13.795 "base_bdev": "BaseBdev1", 00:18:13.795 "raid_bdev": "raid_bdev1", 00:18:13.795 "method": "bdev_raid_add_base_bdev", 00:18:13.795 "req_id": 1 00:18:13.795 } 00:18:13.795 Got JSON-RPC error response 00:18:13.795 response: 00:18:13.795 { 00:18:13.795 "code": -22, 00:18:13.795 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:13.795 } 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.795 12:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.734 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.992 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.992 "name": "raid_bdev1", 00:18:14.992 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:14.992 "strip_size_kb": 64, 00:18:14.992 "state": "online", 00:18:14.992 "raid_level": "raid5f", 00:18:14.992 "superblock": true, 00:18:14.992 "num_base_bdevs": 4, 00:18:14.992 "num_base_bdevs_discovered": 3, 00:18:14.992 "num_base_bdevs_operational": 3, 00:18:14.992 "base_bdevs_list": [ 00:18:14.992 { 00:18:14.992 "name": null, 00:18:14.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.992 "is_configured": false, 00:18:14.992 "data_offset": 0, 00:18:14.992 "data_size": 63488 00:18:14.992 }, 00:18:14.992 { 00:18:14.992 "name": "BaseBdev2", 00:18:14.992 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:14.992 "is_configured": true, 00:18:14.992 "data_offset": 2048, 00:18:14.992 "data_size": 63488 00:18:14.992 }, 00:18:14.992 { 00:18:14.992 "name": "BaseBdev3", 00:18:14.992 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:14.992 "is_configured": true, 00:18:14.992 "data_offset": 2048, 00:18:14.992 "data_size": 63488 00:18:14.992 }, 00:18:14.992 { 00:18:14.992 "name": "BaseBdev4", 00:18:14.992 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:14.992 "is_configured": true, 00:18:14.992 "data_offset": 2048, 00:18:14.992 "data_size": 63488 00:18:14.992 } 00:18:14.992 ] 00:18:14.992 }' 00:18:14.992 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.992 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.249 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.507 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.507 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.507 "name": "raid_bdev1", 00:18:15.507 "uuid": "63ac836e-8335-40c1-b99c-e7db7dde13e0", 00:18:15.507 "strip_size_kb": 64, 00:18:15.507 "state": "online", 00:18:15.507 "raid_level": "raid5f", 00:18:15.507 "superblock": true, 00:18:15.507 "num_base_bdevs": 4, 00:18:15.507 "num_base_bdevs_discovered": 3, 00:18:15.507 "num_base_bdevs_operational": 3, 00:18:15.507 "base_bdevs_list": [ 00:18:15.507 { 00:18:15.507 "name": null, 00:18:15.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.507 "is_configured": false, 00:18:15.507 "data_offset": 0, 00:18:15.507 "data_size": 63488 00:18:15.507 }, 00:18:15.507 { 00:18:15.507 "name": "BaseBdev2", 00:18:15.507 "uuid": "4c2d41f9-25cf-5898-b271-7f487cf9a813", 00:18:15.507 "is_configured": true, 
00:18:15.507 "data_offset": 2048, 00:18:15.507 "data_size": 63488 00:18:15.507 }, 00:18:15.507 { 00:18:15.507 "name": "BaseBdev3", 00:18:15.507 "uuid": "bf869b91-fb85-521d-9b58-937cbc7ed3cc", 00:18:15.507 "is_configured": true, 00:18:15.507 "data_offset": 2048, 00:18:15.507 "data_size": 63488 00:18:15.507 }, 00:18:15.507 { 00:18:15.507 "name": "BaseBdev4", 00:18:15.507 "uuid": "7d201a62-aa28-514c-ad34-b11f01b42610", 00:18:15.507 "is_configured": true, 00:18:15.507 "data_offset": 2048, 00:18:15.507 "data_size": 63488 00:18:15.507 } 00:18:15.507 ] 00:18:15.507 }' 00:18:15.507 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.507 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.507 12:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85616 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85616 ']' 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85616 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85616 00:18:15.507 killing process with pid 85616 00:18:15.507 Received shutdown signal, test time was about 60.000000 seconds 00:18:15.507 00:18:15.507 Latency(us) 00:18:15.507 [2024-11-06T12:49:04.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.507 [2024-11-06T12:49:04.164Z] 
=================================================================================================================== 00:18:15.507 [2024-11-06T12:49:04.164Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85616' 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85616 00:18:15.507 12:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85616 00:18:15.507 [2024-11-06 12:49:04.060945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.507 [2024-11-06 12:49:04.061174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.507 [2024-11-06 12:49:04.061341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.507 [2024-11-06 12:49:04.061373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:16.072 [2024-11-06 12:49:04.589873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.494 12:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:17.494 00:18:17.494 real 0m29.288s 00:18:17.494 user 0m38.043s 00:18:17.494 sys 0m3.085s 00:18:17.494 12:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:17.494 ************************************ 00:18:17.494 END TEST raid5f_rebuild_test_sb 00:18:17.494 ************************************ 00:18:17.494 12:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.494 12:49:05 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:17.494 12:49:05 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:17.494 12:49:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:17.495 12:49:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:17.495 12:49:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.495 ************************************ 00:18:17.495 START TEST raid_state_function_test_sb_4k 00:18:17.495 ************************************ 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:17.495 12:49:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:17.495 Process raid pid: 86444 00:18:17.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86444 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86444' 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86444 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86444 ']' 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.495 12:49:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.495 [2024-11-06 12:49:05.956550] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:18:17.495 [2024-11-06 12:49:05.956955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.495 [2024-11-06 12:49:06.146412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.753 [2024-11-06 12:49:06.343975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.066 [2024-11-06 12:49:06.638273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.066 [2024-11-06 12:49:06.638368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.324 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.324 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:18.324 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:18.324 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.324 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.582 [2024-11-06 12:49:06.980786] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.582 [2024-11-06 12:49:06.980860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.582 [2024-11-06 12:49:06.980879] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.582 [2024-11-06 12:49:06.980896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.582 12:49:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.582 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.582 "name": "Existed_Raid", 00:18:18.582 "uuid": 
"5e2a88e7-9c7c-419a-b626-c35117a8d3ec", 00:18:18.582 "strip_size_kb": 0, 00:18:18.582 "state": "configuring", 00:18:18.582 "raid_level": "raid1", 00:18:18.582 "superblock": true, 00:18:18.582 "num_base_bdevs": 2, 00:18:18.582 "num_base_bdevs_discovered": 0, 00:18:18.582 "num_base_bdevs_operational": 2, 00:18:18.582 "base_bdevs_list": [ 00:18:18.582 { 00:18:18.582 "name": "BaseBdev1", 00:18:18.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.582 "is_configured": false, 00:18:18.582 "data_offset": 0, 00:18:18.582 "data_size": 0 00:18:18.582 }, 00:18:18.582 { 00:18:18.582 "name": "BaseBdev2", 00:18:18.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.582 "is_configured": false, 00:18:18.582 "data_offset": 0, 00:18:18.582 "data_size": 0 00:18:18.582 } 00:18:18.582 ] 00:18:18.582 }' 00:18:18.582 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.582 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.839 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:18.839 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.839 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.839 [2024-11-06 12:49:07.484916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.839 [2024-11-06 12:49:07.484978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:18.839 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.839 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:18.839 12:49:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.839 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.839 [2024-11-06 12:49:07.492844] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.839 [2024-11-06 12:49:07.492905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.839 [2024-11-06 12:49:07.492923] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.839 [2024-11-06 12:49:07.492950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.097 [2024-11-06 12:49:07.544144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.097 BaseBdev1 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:19.097 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.098 [ 00:18:19.098 { 00:18:19.098 "name": "BaseBdev1", 00:18:19.098 "aliases": [ 00:18:19.098 "993a4366-cce3-4348-bf77-4e337d52915a" 00:18:19.098 ], 00:18:19.098 "product_name": "Malloc disk", 00:18:19.098 "block_size": 4096, 00:18:19.098 "num_blocks": 8192, 00:18:19.098 "uuid": "993a4366-cce3-4348-bf77-4e337d52915a", 00:18:19.098 "assigned_rate_limits": { 00:18:19.098 "rw_ios_per_sec": 0, 00:18:19.098 "rw_mbytes_per_sec": 0, 00:18:19.098 "r_mbytes_per_sec": 0, 00:18:19.098 "w_mbytes_per_sec": 0 00:18:19.098 }, 00:18:19.098 "claimed": true, 00:18:19.098 "claim_type": "exclusive_write", 00:18:19.098 "zoned": false, 00:18:19.098 "supported_io_types": { 00:18:19.098 "read": true, 00:18:19.098 "write": true, 00:18:19.098 "unmap": true, 00:18:19.098 "flush": true, 00:18:19.098 "reset": true, 00:18:19.098 "nvme_admin": false, 00:18:19.098 "nvme_io": false, 00:18:19.098 "nvme_io_md": false, 00:18:19.098 "write_zeroes": true, 00:18:19.098 "zcopy": true, 00:18:19.098 
"get_zone_info": false, 00:18:19.098 "zone_management": false, 00:18:19.098 "zone_append": false, 00:18:19.098 "compare": false, 00:18:19.098 "compare_and_write": false, 00:18:19.098 "abort": true, 00:18:19.098 "seek_hole": false, 00:18:19.098 "seek_data": false, 00:18:19.098 "copy": true, 00:18:19.098 "nvme_iov_md": false 00:18:19.098 }, 00:18:19.098 "memory_domains": [ 00:18:19.098 { 00:18:19.098 "dma_device_id": "system", 00:18:19.098 "dma_device_type": 1 00:18:19.098 }, 00:18:19.098 { 00:18:19.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.098 "dma_device_type": 2 00:18:19.098 } 00:18:19.098 ], 00:18:19.098 "driver_specific": {} 00:18:19.098 } 00:18:19.098 ] 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.098 "name": "Existed_Raid", 00:18:19.098 "uuid": "4e7ddaf0-7979-4774-bca3-db7792408e95", 00:18:19.098 "strip_size_kb": 0, 00:18:19.098 "state": "configuring", 00:18:19.098 "raid_level": "raid1", 00:18:19.098 "superblock": true, 00:18:19.098 "num_base_bdevs": 2, 00:18:19.098 "num_base_bdevs_discovered": 1, 00:18:19.098 "num_base_bdevs_operational": 2, 00:18:19.098 "base_bdevs_list": [ 00:18:19.098 { 00:18:19.098 "name": "BaseBdev1", 00:18:19.098 "uuid": "993a4366-cce3-4348-bf77-4e337d52915a", 00:18:19.098 "is_configured": true, 00:18:19.098 "data_offset": 256, 00:18:19.098 "data_size": 7936 00:18:19.098 }, 00:18:19.098 { 00:18:19.098 "name": "BaseBdev2", 00:18:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.098 "is_configured": false, 00:18:19.098 "data_offset": 0, 00:18:19.098 "data_size": 0 00:18:19.098 } 00:18:19.098 ] 00:18:19.098 }' 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.098 12:49:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.665 [2024-11-06 12:49:08.040308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.665 [2024-11-06 12:49:08.040522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.665 [2024-11-06 12:49:08.052339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.665 [2024-11-06 12:49:08.054885] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.665 [2024-11-06 12:49:08.054944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.665 12:49:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.665 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.666 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.666 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.666 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.666 "name": "Existed_Raid", 00:18:19.666 "uuid": "37b23a5f-5ef6-44d3-be5c-46cd9b38c6b4", 00:18:19.666 "strip_size_kb": 0, 00:18:19.666 "state": "configuring", 00:18:19.666 "raid_level": "raid1", 00:18:19.666 "superblock": true, 
00:18:19.666 "num_base_bdevs": 2, 00:18:19.666 "num_base_bdevs_discovered": 1, 00:18:19.666 "num_base_bdevs_operational": 2, 00:18:19.666 "base_bdevs_list": [ 00:18:19.666 { 00:18:19.666 "name": "BaseBdev1", 00:18:19.666 "uuid": "993a4366-cce3-4348-bf77-4e337d52915a", 00:18:19.666 "is_configured": true, 00:18:19.666 "data_offset": 256, 00:18:19.666 "data_size": 7936 00:18:19.666 }, 00:18:19.666 { 00:18:19.666 "name": "BaseBdev2", 00:18:19.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.666 "is_configured": false, 00:18:19.666 "data_offset": 0, 00:18:19.666 "data_size": 0 00:18:19.666 } 00:18:19.666 ] 00:18:19.666 }' 00:18:19.666 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.666 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.924 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:19.924 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.924 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.182 [2024-11-06 12:49:08.606148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.182 [2024-11-06 12:49:08.606704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:20.182 [2024-11-06 12:49:08.606845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.182 BaseBdev2 00:18:20.182 [2024-11-06 12:49:08.607261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:20.182 [2024-11-06 12:49:08.607490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:20.182 [2024-11-06 12:49:08.607512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:18:20.182 [2024-11-06 12:49:08.607708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.182 [ 00:18:20.182 { 00:18:20.182 "name": "BaseBdev2", 00:18:20.182 "aliases": [ 00:18:20.182 "2d0519d5-d7f2-45fa-bb60-ee586e4e3812" 00:18:20.182 ], 00:18:20.182 "product_name": "Malloc 
disk", 00:18:20.182 "block_size": 4096, 00:18:20.182 "num_blocks": 8192, 00:18:20.182 "uuid": "2d0519d5-d7f2-45fa-bb60-ee586e4e3812", 00:18:20.182 "assigned_rate_limits": { 00:18:20.182 "rw_ios_per_sec": 0, 00:18:20.182 "rw_mbytes_per_sec": 0, 00:18:20.182 "r_mbytes_per_sec": 0, 00:18:20.182 "w_mbytes_per_sec": 0 00:18:20.182 }, 00:18:20.182 "claimed": true, 00:18:20.182 "claim_type": "exclusive_write", 00:18:20.182 "zoned": false, 00:18:20.182 "supported_io_types": { 00:18:20.182 "read": true, 00:18:20.182 "write": true, 00:18:20.182 "unmap": true, 00:18:20.182 "flush": true, 00:18:20.182 "reset": true, 00:18:20.182 "nvme_admin": false, 00:18:20.182 "nvme_io": false, 00:18:20.182 "nvme_io_md": false, 00:18:20.182 "write_zeroes": true, 00:18:20.182 "zcopy": true, 00:18:20.182 "get_zone_info": false, 00:18:20.182 "zone_management": false, 00:18:20.182 "zone_append": false, 00:18:20.182 "compare": false, 00:18:20.182 "compare_and_write": false, 00:18:20.182 "abort": true, 00:18:20.182 "seek_hole": false, 00:18:20.182 "seek_data": false, 00:18:20.182 "copy": true, 00:18:20.182 "nvme_iov_md": false 00:18:20.182 }, 00:18:20.182 "memory_domains": [ 00:18:20.182 { 00:18:20.182 "dma_device_id": "system", 00:18:20.182 "dma_device_type": 1 00:18:20.182 }, 00:18:20.182 { 00:18:20.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.182 "dma_device_type": 2 00:18:20.182 } 00:18:20.182 ], 00:18:20.182 "driver_specific": {} 00:18:20.182 } 00:18:20.182 ] 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.182 "name": "Existed_Raid", 00:18:20.182 "uuid": "37b23a5f-5ef6-44d3-be5c-46cd9b38c6b4", 00:18:20.182 "strip_size_kb": 0, 00:18:20.182 "state": "online", 
00:18:20.182 "raid_level": "raid1", 00:18:20.182 "superblock": true, 00:18:20.182 "num_base_bdevs": 2, 00:18:20.182 "num_base_bdevs_discovered": 2, 00:18:20.182 "num_base_bdevs_operational": 2, 00:18:20.182 "base_bdevs_list": [ 00:18:20.182 { 00:18:20.182 "name": "BaseBdev1", 00:18:20.182 "uuid": "993a4366-cce3-4348-bf77-4e337d52915a", 00:18:20.182 "is_configured": true, 00:18:20.182 "data_offset": 256, 00:18:20.182 "data_size": 7936 00:18:20.182 }, 00:18:20.182 { 00:18:20.182 "name": "BaseBdev2", 00:18:20.182 "uuid": "2d0519d5-d7f2-45fa-bb60-ee586e4e3812", 00:18:20.182 "is_configured": true, 00:18:20.182 "data_offset": 256, 00:18:20.182 "data_size": 7936 00:18:20.182 } 00:18:20.182 ] 00:18:20.182 }' 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.182 12:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.749 [2024-11-06 12:49:09.154738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.749 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.749 "name": "Existed_Raid", 00:18:20.749 "aliases": [ 00:18:20.749 "37b23a5f-5ef6-44d3-be5c-46cd9b38c6b4" 00:18:20.749 ], 00:18:20.749 "product_name": "Raid Volume", 00:18:20.749 "block_size": 4096, 00:18:20.749 "num_blocks": 7936, 00:18:20.749 "uuid": "37b23a5f-5ef6-44d3-be5c-46cd9b38c6b4", 00:18:20.749 "assigned_rate_limits": { 00:18:20.749 "rw_ios_per_sec": 0, 00:18:20.749 "rw_mbytes_per_sec": 0, 00:18:20.749 "r_mbytes_per_sec": 0, 00:18:20.749 "w_mbytes_per_sec": 0 00:18:20.749 }, 00:18:20.749 "claimed": false, 00:18:20.749 "zoned": false, 00:18:20.749 "supported_io_types": { 00:18:20.749 "read": true, 00:18:20.749 "write": true, 00:18:20.749 "unmap": false, 00:18:20.749 "flush": false, 00:18:20.749 "reset": true, 00:18:20.749 "nvme_admin": false, 00:18:20.749 "nvme_io": false, 00:18:20.749 "nvme_io_md": false, 00:18:20.749 "write_zeroes": true, 00:18:20.749 "zcopy": false, 00:18:20.749 "get_zone_info": false, 00:18:20.749 "zone_management": false, 00:18:20.749 "zone_append": false, 00:18:20.749 "compare": false, 00:18:20.749 "compare_and_write": false, 00:18:20.749 "abort": false, 00:18:20.749 "seek_hole": false, 00:18:20.749 "seek_data": false, 00:18:20.749 "copy": false, 00:18:20.749 "nvme_iov_md": false 00:18:20.749 }, 00:18:20.749 "memory_domains": [ 00:18:20.749 { 00:18:20.749 "dma_device_id": "system", 00:18:20.749 "dma_device_type": 1 00:18:20.749 }, 00:18:20.749 { 00:18:20.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.749 "dma_device_type": 2 00:18:20.749 }, 00:18:20.749 { 00:18:20.749 
"dma_device_id": "system", 00:18:20.749 "dma_device_type": 1 00:18:20.749 }, 00:18:20.749 { 00:18:20.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.749 "dma_device_type": 2 00:18:20.749 } 00:18:20.749 ], 00:18:20.749 "driver_specific": { 00:18:20.749 "raid": { 00:18:20.749 "uuid": "37b23a5f-5ef6-44d3-be5c-46cd9b38c6b4", 00:18:20.749 "strip_size_kb": 0, 00:18:20.749 "state": "online", 00:18:20.749 "raid_level": "raid1", 00:18:20.749 "superblock": true, 00:18:20.749 "num_base_bdevs": 2, 00:18:20.749 "num_base_bdevs_discovered": 2, 00:18:20.749 "num_base_bdevs_operational": 2, 00:18:20.749 "base_bdevs_list": [ 00:18:20.749 { 00:18:20.749 "name": "BaseBdev1", 00:18:20.749 "uuid": "993a4366-cce3-4348-bf77-4e337d52915a", 00:18:20.749 "is_configured": true, 00:18:20.749 "data_offset": 256, 00:18:20.749 "data_size": 7936 00:18:20.749 }, 00:18:20.750 { 00:18:20.750 "name": "BaseBdev2", 00:18:20.750 "uuid": "2d0519d5-d7f2-45fa-bb60-ee586e4e3812", 00:18:20.750 "is_configured": true, 00:18:20.750 "data_offset": 256, 00:18:20.750 "data_size": 7936 00:18:20.750 } 00:18:20.750 ] 00:18:20.750 } 00:18:20.750 } 00:18:20.750 }' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:20.750 BaseBdev2' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.750 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.009 
12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.009 [2024-11-06 12:49:09.430511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.009 12:49:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.009 "name": "Existed_Raid", 00:18:21.009 "uuid": "37b23a5f-5ef6-44d3-be5c-46cd9b38c6b4", 00:18:21.009 "strip_size_kb": 0, 00:18:21.009 "state": "online", 00:18:21.009 "raid_level": "raid1", 00:18:21.009 "superblock": true, 00:18:21.009 "num_base_bdevs": 2, 00:18:21.009 "num_base_bdevs_discovered": 1, 00:18:21.009 "num_base_bdevs_operational": 1, 00:18:21.009 "base_bdevs_list": [ 00:18:21.009 { 00:18:21.009 "name": null, 00:18:21.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.009 "is_configured": false, 00:18:21.009 "data_offset": 0, 00:18:21.009 "data_size": 7936 00:18:21.009 }, 00:18:21.009 { 00:18:21.009 "name": "BaseBdev2", 00:18:21.009 "uuid": "2d0519d5-d7f2-45fa-bb60-ee586e4e3812", 00:18:21.009 "is_configured": true, 00:18:21.009 "data_offset": 256, 00:18:21.009 "data_size": 7936 00:18:21.009 } 00:18:21.009 ] 00:18:21.009 }' 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.009 12:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:21.576 12:49:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.576 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 [2024-11-06 12:49:10.147727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.576 [2024-11-06 12:49:10.148043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.833 [2024-11-06 12:49:10.241131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.833 [2024-11-06 12:49:10.241462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.833 [2024-11-06 12:49:10.241501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:21.833 12:49:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86444 00:18:21.833 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86444 ']' 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86444 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86444 00:18:21.834 killing process with pid 86444 00:18:21.834 12:49:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86444' 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86444 00:18:21.834 [2024-11-06 12:49:10.338818] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.834 12:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86444 00:18:21.834 [2024-11-06 12:49:10.354047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.826 12:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:22.826 00:18:22.826 real 0m5.640s 00:18:22.826 user 0m8.385s 00:18:22.826 sys 0m0.894s 00:18:22.826 ************************************ 00:18:22.826 END TEST raid_state_function_test_sb_4k 00:18:22.826 ************************************ 00:18:22.826 12:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:22.826 12:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.084 12:49:11 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:23.084 12:49:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:23.084 12:49:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:23.084 12:49:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.084 ************************************ 00:18:23.084 START TEST raid_superblock_test_4k 00:18:23.084 ************************************ 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid1 2 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86702 00:18:23.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86702 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:23.084 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86702 ']' 00:18:23.085 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.085 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.085 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.085 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.085 12:49:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.085 [2024-11-06 12:49:11.643810] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:18:23.085 [2024-11-06 12:49:11.644319] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86702 ] 00:18:23.343 [2024-11-06 12:49:11.831949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.602 [2024-11-06 12:49:12.019289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.602 [2024-11-06 12:49:12.240777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.602 [2024-11-06 12:49:12.240983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 malloc1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 [2024-11-06 12:49:12.726145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.168 [2024-11-06 12:49:12.726383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.168 [2024-11-06 12:49:12.726431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:24.168 [2024-11-06 12:49:12.726450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.168 [2024-11-06 12:49:12.729466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.168 [2024-11-06 12:49:12.729513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.168 pt1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 malloc2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 [2024-11-06 12:49:12.781485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:24.168 [2024-11-06 12:49:12.781690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.168 [2024-11-06 12:49:12.781769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:24.168 [2024-11-06 12:49:12.781879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.168 [2024-11-06 12:49:12.784937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.168 [2024-11-06 
12:49:12.785090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:24.168 pt2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 [2024-11-06 12:49:12.789583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.168 [2024-11-06 12:49:12.792333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.168 [2024-11-06 12:49:12.792693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:24.168 [2024-11-06 12:49:12.792828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:24.168 [2024-11-06 12:49:12.793209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:24.168 [2024-11-06 12:49:12.793431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:24.168 [2024-11-06 12:49:12.793458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:24.168 [2024-11-06 12:49:12.793713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.426 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.426 "name": "raid_bdev1", 00:18:24.426 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:24.426 "strip_size_kb": 0, 00:18:24.426 "state": "online", 00:18:24.426 "raid_level": "raid1", 00:18:24.426 "superblock": true, 00:18:24.426 "num_base_bdevs": 2, 00:18:24.426 
"num_base_bdevs_discovered": 2, 00:18:24.426 "num_base_bdevs_operational": 2, 00:18:24.426 "base_bdevs_list": [ 00:18:24.426 { 00:18:24.426 "name": "pt1", 00:18:24.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.426 "is_configured": true, 00:18:24.426 "data_offset": 256, 00:18:24.426 "data_size": 7936 00:18:24.426 }, 00:18:24.426 { 00:18:24.426 "name": "pt2", 00:18:24.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.426 "is_configured": true, 00:18:24.426 "data_offset": 256, 00:18:24.426 "data_size": 7936 00:18:24.426 } 00:18:24.426 ] 00:18:24.426 }' 00:18:24.426 12:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.426 12:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.683 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.683 [2024-11-06 12:49:13.326381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:24.941 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.941 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:24.941 "name": "raid_bdev1", 00:18:24.941 "aliases": [ 00:18:24.941 "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078" 00:18:24.941 ], 00:18:24.941 "product_name": "Raid Volume", 00:18:24.941 "block_size": 4096, 00:18:24.941 "num_blocks": 7936, 00:18:24.941 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:24.941 "assigned_rate_limits": { 00:18:24.941 "rw_ios_per_sec": 0, 00:18:24.941 "rw_mbytes_per_sec": 0, 00:18:24.941 "r_mbytes_per_sec": 0, 00:18:24.941 "w_mbytes_per_sec": 0 00:18:24.941 }, 00:18:24.941 "claimed": false, 00:18:24.941 "zoned": false, 00:18:24.941 "supported_io_types": { 00:18:24.941 "read": true, 00:18:24.941 "write": true, 00:18:24.941 "unmap": false, 00:18:24.941 "flush": false, 00:18:24.941 "reset": true, 00:18:24.941 "nvme_admin": false, 00:18:24.941 "nvme_io": false, 00:18:24.941 "nvme_io_md": false, 00:18:24.941 "write_zeroes": true, 00:18:24.941 "zcopy": false, 00:18:24.941 "get_zone_info": false, 00:18:24.941 "zone_management": false, 00:18:24.941 "zone_append": false, 00:18:24.941 "compare": false, 00:18:24.941 "compare_and_write": false, 00:18:24.941 "abort": false, 00:18:24.941 "seek_hole": false, 00:18:24.941 "seek_data": false, 00:18:24.941 "copy": false, 00:18:24.941 "nvme_iov_md": false 00:18:24.941 }, 00:18:24.941 "memory_domains": [ 00:18:24.941 { 00:18:24.941 "dma_device_id": "system", 00:18:24.941 "dma_device_type": 1 00:18:24.941 }, 00:18:24.941 { 00:18:24.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.941 "dma_device_type": 2 00:18:24.941 }, 00:18:24.941 { 00:18:24.941 "dma_device_id": "system", 00:18:24.941 "dma_device_type": 1 00:18:24.941 }, 00:18:24.941 { 00:18:24.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.941 "dma_device_type": 2 00:18:24.941 } 00:18:24.941 ], 
00:18:24.941 "driver_specific": { 00:18:24.941 "raid": { 00:18:24.941 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:24.941 "strip_size_kb": 0, 00:18:24.941 "state": "online", 00:18:24.941 "raid_level": "raid1", 00:18:24.941 "superblock": true, 00:18:24.941 "num_base_bdevs": 2, 00:18:24.941 "num_base_bdevs_discovered": 2, 00:18:24.941 "num_base_bdevs_operational": 2, 00:18:24.941 "base_bdevs_list": [ 00:18:24.941 { 00:18:24.941 "name": "pt1", 00:18:24.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.941 "is_configured": true, 00:18:24.941 "data_offset": 256, 00:18:24.941 "data_size": 7936 00:18:24.941 }, 00:18:24.941 { 00:18:24.941 "name": "pt2", 00:18:24.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.941 "is_configured": true, 00:18:24.941 "data_offset": 256, 00:18:24.941 "data_size": 7936 00:18:24.941 } 00:18:24.941 ] 00:18:24.941 } 00:18:24.941 } 00:18:24.941 }' 00:18:24.941 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:24.941 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:24.941 pt2' 00:18:24.941 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.941 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.942 12:49:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:24.942 [2024-11-06 12:49:13.578424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.942 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e25d3763-8fa1-4d1a-9c38-3c7f8ff64078 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z e25d3763-8fa1-4d1a-9c38-3c7f8ff64078 ']' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 [2024-11-06 12:49:13.630033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.200 [2024-11-06 12:49:13.630214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.200 [2024-11-06 12:49:13.630493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.200 [2024-11-06 12:49:13.630683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.200 [2024-11-06 12:49:13.630804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 [2024-11-06 12:49:13.786110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:25.200 [2024-11-06 12:49:13.788926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:25.200 [2024-11-06 12:49:13.789031] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:25.200 [2024-11-06 12:49:13.789121] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:25.200 [2024-11-06 12:49:13.789149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.200 [2024-11-06 12:49:13.789174] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:25.200 request: 00:18:25.200 { 00:18:25.200 "name": "raid_bdev1", 00:18:25.200 "raid_level": "raid1", 00:18:25.200 "base_bdevs": [ 00:18:25.200 "malloc1", 00:18:25.200 "malloc2" 00:18:25.200 ], 00:18:25.200 "superblock": false, 00:18:25.200 "method": "bdev_raid_create", 00:18:25.200 "req_id": 1 00:18:25.200 } 00:18:25.200 Got JSON-RPC error response 00:18:25.200 response: 00:18:25.200 { 00:18:25.200 "code": -17, 00:18:25.200 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:25.200 } 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.200 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.200 [2024-11-06 12:49:13.854110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.200 [2024-11-06 12:49:13.854360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.200 [2024-11-06 12:49:13.854435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:25.200 [2024-11-06 12:49:13.854546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.490 [2024-11-06 12:49:13.857715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.490 [2024-11-06 12:49:13.857766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:25.490 [2024-11-06 12:49:13.857892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:25.490 [2024-11-06 12:49:13.857988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.490 pt1 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.490 "name": "raid_bdev1", 00:18:25.490 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:25.490 "strip_size_kb": 0, 00:18:25.490 "state": "configuring", 00:18:25.490 "raid_level": "raid1", 00:18:25.490 "superblock": true, 00:18:25.490 "num_base_bdevs": 2, 00:18:25.490 "num_base_bdevs_discovered": 1, 00:18:25.490 "num_base_bdevs_operational": 2, 00:18:25.490 "base_bdevs_list": [ 00:18:25.490 { 00:18:25.490 "name": "pt1", 00:18:25.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.490 "is_configured": true, 00:18:25.490 "data_offset": 256, 00:18:25.490 "data_size": 7936 00:18:25.490 }, 00:18:25.490 { 00:18:25.490 "name": null, 00:18:25.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.490 "is_configured": false, 00:18:25.490 "data_offset": 256, 00:18:25.490 "data_size": 7936 00:18:25.490 } 
00:18:25.490 ] 00:18:25.490 }' 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.490 12:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.765 [2024-11-06 12:49:14.358434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.765 [2024-11-06 12:49:14.358700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.765 [2024-11-06 12:49:14.358854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:25.765 [2024-11-06 12:49:14.358978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.765 [2024-11-06 12:49:14.359702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.765 [2024-11-06 12:49:14.359742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.765 [2024-11-06 12:49:14.359863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:25.765 [2024-11-06 12:49:14.359903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.765 [2024-11-06 12:49:14.360066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:25.765 [2024-11-06 12:49:14.360094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:25.765 [2024-11-06 12:49:14.360423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:25.765 [2024-11-06 12:49:14.360632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:25.765 [2024-11-06 12:49:14.360696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:25.765 [2024-11-06 12:49:14.360898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.765 pt2 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.765 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.766 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.024 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.024 "name": "raid_bdev1", 00:18:26.024 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:26.024 "strip_size_kb": 0, 00:18:26.024 "state": "online", 00:18:26.024 "raid_level": "raid1", 00:18:26.024 "superblock": true, 00:18:26.024 "num_base_bdevs": 2, 00:18:26.024 "num_base_bdevs_discovered": 2, 00:18:26.024 "num_base_bdevs_operational": 2, 00:18:26.024 "base_bdevs_list": [ 00:18:26.024 { 00:18:26.024 "name": "pt1", 00:18:26.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.024 "is_configured": true, 00:18:26.024 "data_offset": 256, 00:18:26.024 "data_size": 7936 00:18:26.024 }, 00:18:26.024 { 00:18:26.024 "name": "pt2", 00:18:26.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.024 "is_configured": true, 00:18:26.024 "data_offset": 256, 00:18:26.024 "data_size": 7936 00:18:26.024 } 00:18:26.024 ] 00:18:26.024 }' 00:18:26.024 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.024 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.282 [2024-11-06 12:49:14.846860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.282 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.282 "name": "raid_bdev1", 00:18:26.282 "aliases": [ 00:18:26.282 "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078" 00:18:26.282 ], 00:18:26.282 "product_name": "Raid Volume", 00:18:26.282 "block_size": 4096, 00:18:26.282 "num_blocks": 7936, 00:18:26.282 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:26.282 "assigned_rate_limits": { 00:18:26.282 "rw_ios_per_sec": 0, 00:18:26.282 "rw_mbytes_per_sec": 0, 00:18:26.282 "r_mbytes_per_sec": 0, 00:18:26.282 "w_mbytes_per_sec": 0 00:18:26.282 }, 00:18:26.282 "claimed": false, 00:18:26.282 "zoned": false, 00:18:26.282 "supported_io_types": { 00:18:26.282 "read": true, 00:18:26.282 "write": true, 00:18:26.282 "unmap": false, 
00:18:26.282 "flush": false, 00:18:26.282 "reset": true, 00:18:26.282 "nvme_admin": false, 00:18:26.282 "nvme_io": false, 00:18:26.282 "nvme_io_md": false, 00:18:26.282 "write_zeroes": true, 00:18:26.282 "zcopy": false, 00:18:26.282 "get_zone_info": false, 00:18:26.282 "zone_management": false, 00:18:26.282 "zone_append": false, 00:18:26.282 "compare": false, 00:18:26.282 "compare_and_write": false, 00:18:26.282 "abort": false, 00:18:26.282 "seek_hole": false, 00:18:26.282 "seek_data": false, 00:18:26.282 "copy": false, 00:18:26.282 "nvme_iov_md": false 00:18:26.282 }, 00:18:26.282 "memory_domains": [ 00:18:26.282 { 00:18:26.282 "dma_device_id": "system", 00:18:26.282 "dma_device_type": 1 00:18:26.282 }, 00:18:26.282 { 00:18:26.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.282 "dma_device_type": 2 00:18:26.282 }, 00:18:26.282 { 00:18:26.282 "dma_device_id": "system", 00:18:26.282 "dma_device_type": 1 00:18:26.282 }, 00:18:26.282 { 00:18:26.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.282 "dma_device_type": 2 00:18:26.282 } 00:18:26.282 ], 00:18:26.282 "driver_specific": { 00:18:26.282 "raid": { 00:18:26.282 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:26.282 "strip_size_kb": 0, 00:18:26.282 "state": "online", 00:18:26.282 "raid_level": "raid1", 00:18:26.282 "superblock": true, 00:18:26.282 "num_base_bdevs": 2, 00:18:26.282 "num_base_bdevs_discovered": 2, 00:18:26.282 "num_base_bdevs_operational": 2, 00:18:26.282 "base_bdevs_list": [ 00:18:26.282 { 00:18:26.282 "name": "pt1", 00:18:26.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.282 "is_configured": true, 00:18:26.282 "data_offset": 256, 00:18:26.282 "data_size": 7936 00:18:26.282 }, 00:18:26.282 { 00:18:26.282 "name": "pt2", 00:18:26.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.282 "is_configured": true, 00:18:26.282 "data_offset": 256, 00:18:26.282 "data_size": 7936 00:18:26.282 } 00:18:26.282 ] 00:18:26.282 } 00:18:26.282 } 00:18:26.282 }' 00:18:26.282 
12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.541 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:26.541 pt2' 00:18:26.541 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.541 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:26.541 12:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.541 
12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.541 [2024-11-06 12:49:15.094899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.541 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' e25d3763-8fa1-4d1a-9c38-3c7f8ff64078 '!=' e25d3763-8fa1-4d1a-9c38-3c7f8ff64078 ']' 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.542 [2024-11-06 12:49:15.146692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:26.542 
12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.542 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.801 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.801 "name": "raid_bdev1", 00:18:26.801 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 
00:18:26.801 "strip_size_kb": 0, 00:18:26.801 "state": "online", 00:18:26.801 "raid_level": "raid1", 00:18:26.801 "superblock": true, 00:18:26.801 "num_base_bdevs": 2, 00:18:26.801 "num_base_bdevs_discovered": 1, 00:18:26.801 "num_base_bdevs_operational": 1, 00:18:26.801 "base_bdevs_list": [ 00:18:26.801 { 00:18:26.801 "name": null, 00:18:26.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.801 "is_configured": false, 00:18:26.801 "data_offset": 0, 00:18:26.801 "data_size": 7936 00:18:26.801 }, 00:18:26.801 { 00:18:26.801 "name": "pt2", 00:18:26.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.801 "is_configured": true, 00:18:26.801 "data_offset": 256, 00:18:26.801 "data_size": 7936 00:18:26.801 } 00:18:26.801 ] 00:18:26.801 }' 00:18:26.801 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.801 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.060 [2024-11-06 12:49:15.658774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.060 [2024-11-06 12:49:15.658814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.060 [2024-11-06 12:49:15.658934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.060 [2024-11-06 12:49:15.659007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.060 [2024-11-06 12:49:15.659027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:27.060 12:49:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.060 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:27.318 12:49:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.318 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.318 [2024-11-06 12:49:15.726811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:27.318 [2024-11-06 12:49:15.727076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.318 [2024-11-06 12:49:15.727300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:27.318 [2024-11-06 12:49:15.727437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.318 [2024-11-06 12:49:15.730708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.318 [2024-11-06 12:49:15.730888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:27.318 [2024-11-06 12:49:15.731140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:27.318 [2024-11-06 12:49:15.731250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.318 [2024-11-06 12:49:15.731462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:27.318 [2024-11-06 12:49:15.731487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:27.318 pt2 00:18:27.318 [2024-11-06 12:49:15.731806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:27.318 [2024-11-06 12:49:15.732024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:27.318 [2024-11-06 12:49:15.732040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:18:27.318 [2024-11-06 12:49:15.732252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.319 "name": "raid_bdev1", 00:18:27.319 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:27.319 "strip_size_kb": 0, 00:18:27.319 "state": "online", 00:18:27.319 "raid_level": "raid1", 00:18:27.319 "superblock": true, 00:18:27.319 "num_base_bdevs": 2, 00:18:27.319 "num_base_bdevs_discovered": 1, 00:18:27.319 "num_base_bdevs_operational": 1, 00:18:27.319 "base_bdevs_list": [ 00:18:27.319 { 00:18:27.319 "name": null, 00:18:27.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.319 "is_configured": false, 00:18:27.319 "data_offset": 256, 00:18:27.319 "data_size": 7936 00:18:27.319 }, 00:18:27.319 { 00:18:27.319 "name": "pt2", 00:18:27.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.319 "is_configured": true, 00:18:27.319 "data_offset": 256, 00:18:27.319 "data_size": 7936 00:18:27.319 } 00:18:27.319 ] 00:18:27.319 }' 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.319 12:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.577 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.577 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.577 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.839 [2024-11-06 12:49:16.235351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.839 [2024-11-06 12:49:16.235404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.839 [2024-11-06 12:49:16.235520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.839 [2024-11-06 12:49:16.235604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.839 [2024-11-06 12:49:16.235621] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.839 [2024-11-06 12:49:16.299424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:27.839 [2024-11-06 12:49:16.299677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.839 [2024-11-06 12:49:16.299835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:27.839 [2024-11-06 12:49:16.299952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.839 [2024-11-06 12:49:16.303080] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.839 pt1 00:18:27.839 [2024-11-06 12:49:16.303259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:27.839 [2024-11-06 12:49:16.303418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:27.839 [2024-11-06 12:49:16.303500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.839 [2024-11-06 12:49:16.303749] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:27.839 [2024-11-06 12:49:16.303769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.839 [2024-11-06 12:49:16.303793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:27.839 [2024-11-06 12:49:16.303875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.839 [2024-11-06 12:49:16.303994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:27.839 [2024-11-06 12:49:16.304018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:27.839 [2024-11-06 12:49:16.304360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:27.839 [2024-11-06 12:49:16.304556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:27.839 [2024-11-06 12:49:16.304576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.839 [2024-11-06 12:49:16.304766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.839 "name": "raid_bdev1", 00:18:27.839 "uuid": "e25d3763-8fa1-4d1a-9c38-3c7f8ff64078", 00:18:27.839 "strip_size_kb": 0, 00:18:27.839 "state": "online", 00:18:27.839 "raid_level": "raid1", 
00:18:27.839 "superblock": true, 00:18:27.839 "num_base_bdevs": 2, 00:18:27.839 "num_base_bdevs_discovered": 1, 00:18:27.839 "num_base_bdevs_operational": 1, 00:18:27.839 "base_bdevs_list": [ 00:18:27.839 { 00:18:27.839 "name": null, 00:18:27.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.839 "is_configured": false, 00:18:27.839 "data_offset": 256, 00:18:27.839 "data_size": 7936 00:18:27.839 }, 00:18:27.839 { 00:18:27.839 "name": "pt2", 00:18:27.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.839 "is_configured": true, 00:18:27.839 "data_offset": 256, 00:18:27.839 "data_size": 7936 00:18:27.839 } 00:18:27.839 ] 00:18:27.839 }' 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.839 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.418 
[2024-11-06 12:49:16.880185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' e25d3763-8fa1-4d1a-9c38-3c7f8ff64078 '!=' e25d3763-8fa1-4d1a-9c38-3c7f8ff64078 ']' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86702 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86702 ']' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86702 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86702 00:18:28.418 killing process with pid 86702 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86702' 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86702 00:18:28.418 [2024-11-06 12:49:16.954988] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.418 12:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86702 00:18:28.418 [2024-11-06 12:49:16.955155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.418 [2024-11-06 12:49:16.955264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:28.418 [2024-11-06 12:49:16.955301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:28.686 [2024-11-06 12:49:17.184557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.655 12:49:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:29.655 00:18:29.655 real 0m6.765s 00:18:29.655 user 0m10.578s 00:18:29.655 sys 0m1.053s 00:18:29.655 12:49:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:29.655 12:49:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.655 ************************************ 00:18:29.655 END TEST raid_superblock_test_4k 00:18:29.655 ************************************ 00:18:29.918 12:49:18 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:29.918 12:49:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:29.918 12:49:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:29.918 12:49:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:29.918 12:49:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.918 ************************************ 00:18:29.918 START TEST raid_rebuild_test_sb_4k 00:18:29.918 ************************************ 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.918 12:49:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:29.918 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87029 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87029 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 87029 ']' 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.919 12:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.919 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.919 Zero copy mechanism will not be used. 00:18:29.919 [2024-11-06 12:49:18.453829] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:18:29.919 [2024-11-06 12:49:18.453990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87029 ] 00:18:30.186 [2024-11-06 12:49:18.638425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.186 [2024-11-06 12:49:18.806760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.460 [2024-11-06 12:49:19.028648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.460 [2024-11-06 12:49:19.028746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.027 BaseBdev1_malloc 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.027 [2024-11-06 12:49:19.578439] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.027 [2024-11-06 12:49:19.578674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.027 [2024-11-06 12:49:19.578756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.027 [2024-11-06 12:49:19.578929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.027 [2024-11-06 12:49:19.581960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.027 BaseBdev1 00:18:31.027 [2024-11-06 12:49:19.582145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.027 BaseBdev2_malloc 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.027 [2024-11-06 12:49:19.634044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:31.027 [2024-11-06 12:49:19.634292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:31.027 [2024-11-06 12:49:19.634371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:31.027 [2024-11-06 12:49:19.634507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.027 [2024-11-06 12:49:19.637548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.027 [2024-11-06 12:49:19.637720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:31.027 BaseBdev2 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.027 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.285 spare_malloc 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.285 spare_delay 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.285 
[2024-11-06 12:49:19.717764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.285 [2024-11-06 12:49:19.717991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.285 [2024-11-06 12:49:19.718081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:31.285 [2024-11-06 12:49:19.718109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.285 [2024-11-06 12:49:19.721120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.285 spare 00:18:31.285 [2024-11-06 12:49:19.721301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.285 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.285 [2024-11-06 12:49:19.726005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.285 [2024-11-06 12:49:19.728628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.285 [2024-11-06 12:49:19.729016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.286 [2024-11-06 12:49:19.729047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:31.286 [2024-11-06 12:49:19.729407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:31.286 [2024-11-06 12:49:19.729654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.286 [2024-11-06 
12:49:19.729672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:31.286 [2024-11-06 12:49:19.729922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.286 "name": "raid_bdev1", 00:18:31.286 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:31.286 "strip_size_kb": 0, 00:18:31.286 "state": "online", 00:18:31.286 "raid_level": "raid1", 00:18:31.286 "superblock": true, 00:18:31.286 "num_base_bdevs": 2, 00:18:31.286 "num_base_bdevs_discovered": 2, 00:18:31.286 "num_base_bdevs_operational": 2, 00:18:31.286 "base_bdevs_list": [ 00:18:31.286 { 00:18:31.286 "name": "BaseBdev1", 00:18:31.286 "uuid": "5c71294d-9008-5281-aa68-780658ec6eb4", 00:18:31.286 "is_configured": true, 00:18:31.286 "data_offset": 256, 00:18:31.286 "data_size": 7936 00:18:31.286 }, 00:18:31.286 { 00:18:31.286 "name": "BaseBdev2", 00:18:31.286 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:31.286 "is_configured": true, 00:18:31.286 "data_offset": 256, 00:18:31.286 "data_size": 7936 00:18:31.286 } 00:18:31.286 ] 00:18:31.286 }' 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.286 12:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.853 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.853 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.853 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.854 [2024-11-06 12:49:20.246579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.854 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.854 
12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:32.112 [2024-11-06 12:49:20.658385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:32.112 /dev/nbd0 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.112 1+0 records in 00:18:32.112 1+0 records out 00:18:32.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440815 s, 9.3 MB/s 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:32.112 12:49:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:32.112 12:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:33.046 7936+0 records in 00:18:33.046 7936+0 records out 00:18:33.046 32505856 bytes (33 MB, 31 MiB) copied, 0.97954 s, 33.2 MB/s 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.303 12:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:33.561 [2024-11-06 12:49:22.022032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.561 [2024-11-06 12:49:22.054165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.561 12:49:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.561 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.562 "name": "raid_bdev1", 00:18:33.562 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:33.562 "strip_size_kb": 0, 00:18:33.562 "state": "online", 00:18:33.562 "raid_level": "raid1", 00:18:33.562 "superblock": true, 00:18:33.562 "num_base_bdevs": 2, 00:18:33.562 "num_base_bdevs_discovered": 1, 00:18:33.562 "num_base_bdevs_operational": 1, 00:18:33.562 "base_bdevs_list": [ 00:18:33.562 { 00:18:33.562 "name": null, 00:18:33.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.562 "is_configured": false, 00:18:33.562 "data_offset": 0, 00:18:33.562 "data_size": 7936 00:18:33.562 }, 00:18:33.562 { 00:18:33.562 "name": "BaseBdev2", 00:18:33.562 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:33.562 "is_configured": true, 00:18:33.562 "data_offset": 256, 00:18:33.562 
"data_size": 7936 00:18:33.562 } 00:18:33.562 ] 00:18:33.562 }' 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.562 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.127 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.127 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.127 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.127 [2024-11-06 12:49:22.554343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.127 [2024-11-06 12:49:22.571912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:34.127 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.127 12:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:34.127 [2024-11-06 12:49:22.574656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.065 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.066 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.066 "name": "raid_bdev1", 00:18:35.066 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:35.066 "strip_size_kb": 0, 00:18:35.066 "state": "online", 00:18:35.066 "raid_level": "raid1", 00:18:35.066 "superblock": true, 00:18:35.066 "num_base_bdevs": 2, 00:18:35.066 "num_base_bdevs_discovered": 2, 00:18:35.066 "num_base_bdevs_operational": 2, 00:18:35.066 "process": { 00:18:35.066 "type": "rebuild", 00:18:35.066 "target": "spare", 00:18:35.066 "progress": { 00:18:35.066 "blocks": 2560, 00:18:35.066 "percent": 32 00:18:35.066 } 00:18:35.066 }, 00:18:35.066 "base_bdevs_list": [ 00:18:35.066 { 00:18:35.066 "name": "spare", 00:18:35.066 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:35.066 "is_configured": true, 00:18:35.066 "data_offset": 256, 00:18:35.066 "data_size": 7936 00:18:35.066 }, 00:18:35.066 { 00:18:35.066 "name": "BaseBdev2", 00:18:35.066 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:35.066 "is_configured": true, 00:18:35.066 "data_offset": 256, 00:18:35.066 "data_size": 7936 00:18:35.066 } 00:18:35.066 ] 00:18:35.066 }' 00:18:35.066 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.066 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.066 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.326 
12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.326 [2024-11-06 12:49:23.736371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.326 [2024-11-06 12:49:23.786104] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.326 [2024-11-06 12:49:23.786291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.326 [2024-11-06 12:49:23.786321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.326 [2024-11-06 12:49:23.786343] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.326 12:49:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.326 "name": "raid_bdev1", 00:18:35.326 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:35.326 "strip_size_kb": 0, 00:18:35.326 "state": "online", 00:18:35.326 "raid_level": "raid1", 00:18:35.326 "superblock": true, 00:18:35.326 "num_base_bdevs": 2, 00:18:35.326 "num_base_bdevs_discovered": 1, 00:18:35.326 "num_base_bdevs_operational": 1, 00:18:35.326 "base_bdevs_list": [ 00:18:35.326 { 00:18:35.326 "name": null, 00:18:35.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.326 "is_configured": false, 00:18:35.326 "data_offset": 0, 00:18:35.326 "data_size": 7936 00:18:35.326 }, 00:18:35.326 { 00:18:35.326 "name": "BaseBdev2", 00:18:35.326 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:35.326 "is_configured": true, 00:18:35.326 "data_offset": 256, 00:18:35.326 "data_size": 7936 00:18:35.326 } 00:18:35.326 ] 00:18:35.326 }' 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.326 12:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.893 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.893 "name": "raid_bdev1", 00:18:35.893 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:35.894 "strip_size_kb": 0, 00:18:35.894 "state": "online", 00:18:35.894 "raid_level": "raid1", 00:18:35.894 "superblock": true, 00:18:35.894 "num_base_bdevs": 2, 00:18:35.894 "num_base_bdevs_discovered": 1, 00:18:35.894 "num_base_bdevs_operational": 1, 00:18:35.894 "base_bdevs_list": [ 00:18:35.894 { 00:18:35.894 "name": null, 00:18:35.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.894 "is_configured": false, 00:18:35.894 "data_offset": 0, 00:18:35.894 "data_size": 7936 00:18:35.894 }, 00:18:35.894 { 00:18:35.894 "name": "BaseBdev2", 00:18:35.894 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:35.894 "is_configured": true, 00:18:35.894 "data_offset": 256, 00:18:35.894 "data_size": 7936 
00:18:35.894 } 00:18:35.894 ] 00:18:35.894 }' 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 [2024-11-06 12:49:24.481381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.894 [2024-11-06 12:49:24.498115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.894 12:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:35.894 [2024-11-06 12:49:24.500848] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.287 "name": "raid_bdev1", 00:18:37.287 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:37.287 "strip_size_kb": 0, 00:18:37.287 "state": "online", 00:18:37.287 "raid_level": "raid1", 00:18:37.287 "superblock": true, 00:18:37.287 "num_base_bdevs": 2, 00:18:37.287 "num_base_bdevs_discovered": 2, 00:18:37.287 "num_base_bdevs_operational": 2, 00:18:37.287 "process": { 00:18:37.287 "type": "rebuild", 00:18:37.287 "target": "spare", 00:18:37.287 "progress": { 00:18:37.287 "blocks": 2560, 00:18:37.287 "percent": 32 00:18:37.287 } 00:18:37.287 }, 00:18:37.287 "base_bdevs_list": [ 00:18:37.287 { 00:18:37.287 "name": "spare", 00:18:37.287 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:37.287 "is_configured": true, 00:18:37.287 "data_offset": 256, 00:18:37.287 "data_size": 7936 00:18:37.287 }, 00:18:37.287 { 00:18:37.287 "name": "BaseBdev2", 00:18:37.287 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:37.287 "is_configured": true, 00:18:37.287 "data_offset": 256, 00:18:37.287 "data_size": 7936 00:18:37.287 } 00:18:37.287 ] 00:18:37.287 }' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:37.287 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=739 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.287 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.287 "name": "raid_bdev1", 00:18:37.287 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:37.287 "strip_size_kb": 0, 00:18:37.287 "state": "online", 00:18:37.287 "raid_level": "raid1", 00:18:37.287 "superblock": true, 00:18:37.287 "num_base_bdevs": 2, 00:18:37.287 "num_base_bdevs_discovered": 2, 00:18:37.287 "num_base_bdevs_operational": 2, 00:18:37.287 "process": { 00:18:37.287 "type": "rebuild", 00:18:37.287 "target": "spare", 00:18:37.287 "progress": { 00:18:37.287 "blocks": 2816, 00:18:37.287 "percent": 35 00:18:37.287 } 00:18:37.287 }, 00:18:37.287 "base_bdevs_list": [ 00:18:37.287 { 00:18:37.287 "name": "spare", 00:18:37.287 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:37.287 "is_configured": true, 00:18:37.288 "data_offset": 256, 00:18:37.288 "data_size": 7936 00:18:37.288 }, 00:18:37.288 { 00:18:37.288 "name": "BaseBdev2", 00:18:37.288 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:37.288 "is_configured": true, 00:18:37.288 "data_offset": 256, 00:18:37.288 "data_size": 7936 00:18:37.288 } 00:18:37.288 ] 00:18:37.288 }' 00:18:37.288 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.288 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.288 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.288 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.288 12:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.222 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.481 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.481 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.481 "name": "raid_bdev1", 00:18:38.481 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:38.481 "strip_size_kb": 0, 00:18:38.481 "state": "online", 00:18:38.481 "raid_level": "raid1", 00:18:38.481 "superblock": true, 00:18:38.481 "num_base_bdevs": 2, 00:18:38.481 "num_base_bdevs_discovered": 2, 00:18:38.481 "num_base_bdevs_operational": 2, 00:18:38.481 "process": { 00:18:38.481 "type": "rebuild", 00:18:38.481 "target": "spare", 00:18:38.481 "progress": { 00:18:38.481 "blocks": 5888, 00:18:38.481 "percent": 74 00:18:38.481 } 00:18:38.481 }, 00:18:38.481 "base_bdevs_list": [ 00:18:38.481 { 00:18:38.481 "name": "spare", 
00:18:38.481 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:38.481 "is_configured": true, 00:18:38.481 "data_offset": 256, 00:18:38.481 "data_size": 7936 00:18:38.481 }, 00:18:38.481 { 00:18:38.481 "name": "BaseBdev2", 00:18:38.481 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:38.481 "is_configured": true, 00:18:38.481 "data_offset": 256, 00:18:38.481 "data_size": 7936 00:18:38.481 } 00:18:38.481 ] 00:18:38.481 }' 00:18:38.481 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.481 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.481 12:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.481 12:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.481 12:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.048 [2024-11-06 12:49:27.630171] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:39.048 [2024-11-06 12:49:27.630313] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:39.048 [2024-11-06 12:49:27.630503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.613 12:49:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.613 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.613 "name": "raid_bdev1", 00:18:39.613 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:39.613 "strip_size_kb": 0, 00:18:39.613 "state": "online", 00:18:39.613 "raid_level": "raid1", 00:18:39.613 "superblock": true, 00:18:39.613 "num_base_bdevs": 2, 00:18:39.613 "num_base_bdevs_discovered": 2, 00:18:39.613 "num_base_bdevs_operational": 2, 00:18:39.613 "base_bdevs_list": [ 00:18:39.613 { 00:18:39.613 "name": "spare", 00:18:39.614 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:39.614 "is_configured": true, 00:18:39.614 "data_offset": 256, 00:18:39.614 "data_size": 7936 00:18:39.614 }, 00:18:39.614 { 00:18:39.614 "name": "BaseBdev2", 00:18:39.614 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:39.614 "is_configured": true, 00:18:39.614 "data_offset": 256, 00:18:39.614 "data_size": 7936 00:18:39.614 } 00:18:39.614 ] 00:18:39.614 }' 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.614 12:49:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.614 "name": "raid_bdev1", 00:18:39.614 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:39.614 "strip_size_kb": 0, 00:18:39.614 "state": "online", 00:18:39.614 "raid_level": "raid1", 00:18:39.614 "superblock": true, 00:18:39.614 "num_base_bdevs": 2, 00:18:39.614 "num_base_bdevs_discovered": 2, 00:18:39.614 "num_base_bdevs_operational": 2, 00:18:39.614 "base_bdevs_list": [ 00:18:39.614 { 00:18:39.614 "name": "spare", 00:18:39.614 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:39.614 "is_configured": true, 00:18:39.614 "data_offset": 256, 00:18:39.614 
"data_size": 7936 00:18:39.614 }, 00:18:39.614 { 00:18:39.614 "name": "BaseBdev2", 00:18:39.614 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:39.614 "is_configured": true, 00:18:39.614 "data_offset": 256, 00:18:39.614 "data_size": 7936 00:18:39.614 } 00:18:39.614 ] 00:18:39.614 }' 00:18:39.614 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.872 "name": "raid_bdev1", 00:18:39.872 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:39.872 "strip_size_kb": 0, 00:18:39.872 "state": "online", 00:18:39.872 "raid_level": "raid1", 00:18:39.872 "superblock": true, 00:18:39.872 "num_base_bdevs": 2, 00:18:39.872 "num_base_bdevs_discovered": 2, 00:18:39.872 "num_base_bdevs_operational": 2, 00:18:39.872 "base_bdevs_list": [ 00:18:39.872 { 00:18:39.872 "name": "spare", 00:18:39.872 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:39.872 "is_configured": true, 00:18:39.872 "data_offset": 256, 00:18:39.872 "data_size": 7936 00:18:39.872 }, 00:18:39.872 { 00:18:39.872 "name": "BaseBdev2", 00:18:39.872 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:39.872 "is_configured": true, 00:18:39.872 "data_offset": 256, 00:18:39.872 "data_size": 7936 00:18:39.872 } 00:18:39.872 ] 00:18:39.872 }' 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.872 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.445 [2024-11-06 12:49:28.898599] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.445 [2024-11-06 12:49:28.898682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.445 [2024-11-06 12:49:28.898824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.445 [2024-11-06 12:49:28.898943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.445 [2024-11-06 12:49:28.898967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:40.445 
12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:40.445 12:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:40.703 /dev/nbd0 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.703 1+0 records in 00:18:40.703 1+0 records out 00:18:40.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359406 s, 11.4 MB/s 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:40.703 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:41.268 /dev/nbd1 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 
00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.268 1+0 records in 00:18:41.268 1+0 records out 00:18:41.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431062 s, 9.5 MB/s 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:41.268 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:41.269 12:49:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:41.269 12:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.835 
12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.835 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.835 [2024-11-06 12:49:30.490632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:41.835 [2024-11-06 12:49:30.490707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.093 [2024-11-06 12:49:30.490745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:42.093 [2024-11-06 12:49:30.490761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.093 [2024-11-06 12:49:30.493944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.093 [2024-11-06 12:49:30.493991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:18:42.093 [2024-11-06 12:49:30.494110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:42.093 [2024-11-06 12:49:30.494211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.093 [2024-11-06 12:49:30.494412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.093 spare 00:18:42.093 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.094 [2024-11-06 12:49:30.594553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:42.094 [2024-11-06 12:49:30.594613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:42.094 [2024-11-06 12:49:30.595129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:42.094 [2024-11-06 12:49:30.595459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:42.094 [2024-11-06 12:49:30.595484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:42.094 [2024-11-06 12:49:30.595774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.094 
12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.094 "name": "raid_bdev1", 00:18:42.094 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:42.094 "strip_size_kb": 0, 00:18:42.094 "state": "online", 00:18:42.094 "raid_level": "raid1", 00:18:42.094 "superblock": true, 00:18:42.094 "num_base_bdevs": 2, 00:18:42.094 "num_base_bdevs_discovered": 2, 00:18:42.094 "num_base_bdevs_operational": 2, 00:18:42.094 "base_bdevs_list": [ 00:18:42.094 { 00:18:42.094 "name": "spare", 00:18:42.094 "uuid": 
"c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:42.094 "is_configured": true, 00:18:42.094 "data_offset": 256, 00:18:42.094 "data_size": 7936 00:18:42.094 }, 00:18:42.094 { 00:18:42.094 "name": "BaseBdev2", 00:18:42.094 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:42.094 "is_configured": true, 00:18:42.094 "data_offset": 256, 00:18:42.094 "data_size": 7936 00:18:42.094 } 00:18:42.094 ] 00:18:42.094 }' 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.094 12:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.661 "name": "raid_bdev1", 00:18:42.661 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:42.661 "strip_size_kb": 0, 00:18:42.661 
"state": "online", 00:18:42.661 "raid_level": "raid1", 00:18:42.661 "superblock": true, 00:18:42.661 "num_base_bdevs": 2, 00:18:42.661 "num_base_bdevs_discovered": 2, 00:18:42.661 "num_base_bdevs_operational": 2, 00:18:42.661 "base_bdevs_list": [ 00:18:42.661 { 00:18:42.661 "name": "spare", 00:18:42.661 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:42.661 "is_configured": true, 00:18:42.661 "data_offset": 256, 00:18:42.661 "data_size": 7936 00:18:42.661 }, 00:18:42.661 { 00:18:42.661 "name": "BaseBdev2", 00:18:42.661 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:42.661 "is_configured": true, 00:18:42.661 "data_offset": 256, 00:18:42.661 "data_size": 7936 00:18:42.661 } 00:18:42.661 ] 00:18:42.661 }' 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.661 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:42.919 12:49:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.919 [2024-11-06 12:49:31.344050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.919 
12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.919 "name": "raid_bdev1", 00:18:42.919 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:42.919 "strip_size_kb": 0, 00:18:42.919 "state": "online", 00:18:42.919 "raid_level": "raid1", 00:18:42.919 "superblock": true, 00:18:42.919 "num_base_bdevs": 2, 00:18:42.919 "num_base_bdevs_discovered": 1, 00:18:42.919 "num_base_bdevs_operational": 1, 00:18:42.919 "base_bdevs_list": [ 00:18:42.919 { 00:18:42.919 "name": null, 00:18:42.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.919 "is_configured": false, 00:18:42.919 "data_offset": 0, 00:18:42.919 "data_size": 7936 00:18:42.919 }, 00:18:42.919 { 00:18:42.919 "name": "BaseBdev2", 00:18:42.919 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:42.919 "is_configured": true, 00:18:42.919 "data_offset": 256, 00:18:42.919 "data_size": 7936 00:18:42.919 } 00:18:42.919 ] 00:18:42.919 }' 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.919 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.486 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.486 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.486 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:43.486 [2024-11-06 12:49:31.864290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.486 [2024-11-06 12:49:31.864581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.486 [2024-11-06 12:49:31.864607] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:18:43.486 [2024-11-06 12:49:31.864657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.486 [2024-11-06 12:49:31.881811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:43.486 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.486 12:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:43.486 [2024-11-06 12:49:31.884641] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.422 "name": "raid_bdev1", 00:18:44.422 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:44.422 
"strip_size_kb": 0, 00:18:44.422 "state": "online", 00:18:44.422 "raid_level": "raid1", 00:18:44.422 "superblock": true, 00:18:44.422 "num_base_bdevs": 2, 00:18:44.422 "num_base_bdevs_discovered": 2, 00:18:44.422 "num_base_bdevs_operational": 2, 00:18:44.422 "process": { 00:18:44.422 "type": "rebuild", 00:18:44.422 "target": "spare", 00:18:44.422 "progress": { 00:18:44.422 "blocks": 2560, 00:18:44.422 "percent": 32 00:18:44.422 } 00:18:44.422 }, 00:18:44.422 "base_bdevs_list": [ 00:18:44.422 { 00:18:44.422 "name": "spare", 00:18:44.422 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:44.422 "is_configured": true, 00:18:44.422 "data_offset": 256, 00:18:44.422 "data_size": 7936 00:18:44.422 }, 00:18:44.422 { 00:18:44.422 "name": "BaseBdev2", 00:18:44.422 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:44.422 "is_configured": true, 00:18:44.422 "data_offset": 256, 00:18:44.422 "data_size": 7936 00:18:44.422 } 00:18:44.422 ] 00:18:44.422 }' 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.422 12:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.422 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.422 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:44.422 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.422 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.422 [2024-11-06 12:49:33.062725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.681 [2024-11-06 12:49:33.096296] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:18:44.681 [2024-11-06 12:49:33.096399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.681 [2024-11-06 12:49:33.096426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.681 [2024-11-06 12:49:33.096442] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:44.681 12:49:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.681 "name": "raid_bdev1", 00:18:44.681 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:44.681 "strip_size_kb": 0, 00:18:44.681 "state": "online", 00:18:44.681 "raid_level": "raid1", 00:18:44.681 "superblock": true, 00:18:44.681 "num_base_bdevs": 2, 00:18:44.681 "num_base_bdevs_discovered": 1, 00:18:44.681 "num_base_bdevs_operational": 1, 00:18:44.681 "base_bdevs_list": [ 00:18:44.681 { 00:18:44.681 "name": null, 00:18:44.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.681 "is_configured": false, 00:18:44.681 "data_offset": 0, 00:18:44.681 "data_size": 7936 00:18:44.681 }, 00:18:44.681 { 00:18:44.681 "name": "BaseBdev2", 00:18:44.681 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:44.681 "is_configured": true, 00:18:44.681 "data_offset": 256, 00:18:44.681 "data_size": 7936 00:18:44.681 } 00:18:44.681 ] 00:18:44.681 }' 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.681 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:45.247 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.247 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.247 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:45.247 [2024-11-06 12:49:33.642873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.247 [2024-11-06 12:49:33.642993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.247 [2024-11-06 
12:49:33.643031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:45.247 [2024-11-06 12:49:33.643052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.247 [2024-11-06 12:49:33.643752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.247 [2024-11-06 12:49:33.643797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.247 [2024-11-06 12:49:33.643931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:45.247 [2024-11-06 12:49:33.643968] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.247 [2024-11-06 12:49:33.643987] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:45.247 [2024-11-06 12:49:33.644025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.247 [2024-11-06 12:49:33.660540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:45.247 spare 00:18:45.247 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.247 12:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:45.247 [2024-11-06 12:49:33.663280] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.184 "name": "raid_bdev1", 00:18:46.184 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:46.184 "strip_size_kb": 0, 00:18:46.184 "state": "online", 00:18:46.184 "raid_level": "raid1", 00:18:46.184 "superblock": true, 00:18:46.184 "num_base_bdevs": 2, 00:18:46.184 "num_base_bdevs_discovered": 2, 00:18:46.184 "num_base_bdevs_operational": 2, 00:18:46.184 "process": { 00:18:46.184 "type": "rebuild", 00:18:46.184 "target": "spare", 00:18:46.184 "progress": { 00:18:46.184 "blocks": 2560, 00:18:46.184 "percent": 32 00:18:46.184 } 00:18:46.184 }, 00:18:46.184 "base_bdevs_list": [ 00:18:46.184 { 00:18:46.184 "name": "spare", 00:18:46.184 "uuid": "c65c118e-831d-5639-8b63-b87ac4724d77", 00:18:46.184 "is_configured": true, 00:18:46.184 "data_offset": 256, 00:18:46.184 "data_size": 7936 00:18:46.184 }, 00:18:46.184 { 00:18:46.184 "name": "BaseBdev2", 00:18:46.184 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:46.184 "is_configured": true, 00:18:46.184 "data_offset": 256, 00:18:46.184 "data_size": 7936 00:18:46.184 } 00:18:46.184 ] 00:18:46.184 }' 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.184 12:49:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.184 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.184 [2024-11-06 12:49:34.829054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.443 [2024-11-06 12:49:34.874625] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:46.444 [2024-11-06 12:49:34.874768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.444 [2024-11-06 12:49:34.874799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.444 [2024-11-06 12:49:34.874811] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.444 12:49:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.444 "name": "raid_bdev1", 00:18:46.444 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:46.444 "strip_size_kb": 0, 00:18:46.444 "state": "online", 00:18:46.444 "raid_level": "raid1", 00:18:46.444 "superblock": true, 00:18:46.444 "num_base_bdevs": 2, 00:18:46.444 "num_base_bdevs_discovered": 1, 00:18:46.444 "num_base_bdevs_operational": 1, 00:18:46.444 "base_bdevs_list": [ 00:18:46.444 { 00:18:46.444 "name": null, 00:18:46.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.444 "is_configured": false, 00:18:46.444 "data_offset": 0, 00:18:46.444 "data_size": 7936 00:18:46.444 }, 00:18:46.444 { 00:18:46.444 "name": "BaseBdev2", 00:18:46.444 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:46.444 "is_configured": true, 00:18:46.444 "data_offset": 256, 00:18:46.444 
"data_size": 7936 00:18:46.444 } 00:18:46.444 ] 00:18:46.444 }' 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.444 12:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.011 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.012 "name": "raid_bdev1", 00:18:47.012 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:47.012 "strip_size_kb": 0, 00:18:47.012 "state": "online", 00:18:47.012 "raid_level": "raid1", 00:18:47.012 "superblock": true, 00:18:47.012 "num_base_bdevs": 2, 00:18:47.012 "num_base_bdevs_discovered": 1, 00:18:47.012 "num_base_bdevs_operational": 1, 00:18:47.012 "base_bdevs_list": [ 00:18:47.012 { 00:18:47.012 "name": null, 00:18:47.012 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:47.012 "is_configured": false, 00:18:47.012 "data_offset": 0, 00:18:47.012 "data_size": 7936 00:18:47.012 }, 00:18:47.012 { 00:18:47.012 "name": "BaseBdev2", 00:18:47.012 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:47.012 "is_configured": true, 00:18:47.012 "data_offset": 256, 00:18:47.012 "data_size": 7936 00:18:47.012 } 00:18:47.012 ] 00:18:47.012 }' 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:47.012 [2024-11-06 12:49:35.600963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:47.012 [2024-11-06 12:49:35.601065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.012 [2024-11-06 12:49:35.601101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:18:47.012 [2024-11-06 12:49:35.601128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.012 [2024-11-06 12:49:35.601760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.012 [2024-11-06 12:49:35.601798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:47.012 [2024-11-06 12:49:35.601915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:47.012 [2024-11-06 12:49:35.601939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.012 [2024-11-06 12:49:35.601954] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:47.012 [2024-11-06 12:49:35.601969] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:47.012 BaseBdev1 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.012 12:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.389 "name": "raid_bdev1", 00:18:48.389 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:48.389 "strip_size_kb": 0, 00:18:48.389 "state": "online", 00:18:48.389 "raid_level": "raid1", 00:18:48.389 "superblock": true, 00:18:48.389 "num_base_bdevs": 2, 00:18:48.389 "num_base_bdevs_discovered": 1, 00:18:48.389 "num_base_bdevs_operational": 1, 00:18:48.389 "base_bdevs_list": [ 00:18:48.389 { 00:18:48.389 "name": null, 00:18:48.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.389 "is_configured": false, 00:18:48.389 "data_offset": 0, 00:18:48.389 "data_size": 7936 00:18:48.389 }, 00:18:48.389 { 00:18:48.389 "name": "BaseBdev2", 00:18:48.389 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:48.389 "is_configured": true, 00:18:48.389 "data_offset": 256, 00:18:48.389 "data_size": 7936 00:18:48.389 } 00:18:48.389 ] 00:18:48.389 }' 00:18:48.389 12:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.389 12:49:36 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.668 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.668 "name": "raid_bdev1", 00:18:48.668 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:48.668 "strip_size_kb": 0, 00:18:48.668 "state": "online", 00:18:48.668 "raid_level": "raid1", 00:18:48.668 "superblock": true, 00:18:48.668 "num_base_bdevs": 2, 00:18:48.668 "num_base_bdevs_discovered": 1, 00:18:48.668 "num_base_bdevs_operational": 1, 00:18:48.668 "base_bdevs_list": [ 00:18:48.668 { 00:18:48.668 "name": null, 00:18:48.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.668 "is_configured": false, 00:18:48.668 "data_offset": 0, 00:18:48.668 "data_size": 7936 00:18:48.668 }, 00:18:48.668 { 00:18:48.668 "name": "BaseBdev2", 00:18:48.669 "uuid": 
"132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:48.669 "is_configured": true, 00:18:48.669 "data_offset": 256, 00:18:48.669 "data_size": 7936 00:18:48.669 } 00:18:48.669 ] 00:18:48.669 }' 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:48.669 [2024-11-06 12:49:37.273636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:18:48.669 [2024-11-06 12:49:37.274055] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.669 [2024-11-06 12:49:37.274088] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:48.669 request: 00:18:48.669 { 00:18:48.669 "base_bdev": "BaseBdev1", 00:18:48.669 "raid_bdev": "raid_bdev1", 00:18:48.669 "method": "bdev_raid_add_base_bdev", 00:18:48.669 "req_id": 1 00:18:48.669 } 00:18:48.669 Got JSON-RPC error response 00:18:48.669 response: 00:18:48.669 { 00:18:48.669 "code": -22, 00:18:48.669 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:48.669 } 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.669 12:49:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.045 "name": "raid_bdev1", 00:18:50.045 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:50.045 "strip_size_kb": 0, 00:18:50.045 "state": "online", 00:18:50.045 "raid_level": "raid1", 00:18:50.045 "superblock": true, 00:18:50.045 "num_base_bdevs": 2, 00:18:50.045 "num_base_bdevs_discovered": 1, 00:18:50.045 "num_base_bdevs_operational": 1, 00:18:50.045 "base_bdevs_list": [ 00:18:50.045 { 00:18:50.045 "name": null, 00:18:50.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.045 "is_configured": false, 00:18:50.045 "data_offset": 0, 00:18:50.045 "data_size": 7936 00:18:50.045 }, 00:18:50.045 { 00:18:50.045 "name": "BaseBdev2", 00:18:50.045 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:50.045 "is_configured": true, 00:18:50.045 "data_offset": 256, 00:18:50.045 "data_size": 7936 00:18:50.045 } 
00:18:50.045 ] 00:18:50.045 }' 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.045 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.304 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.304 "name": "raid_bdev1", 00:18:50.304 "uuid": "fa1b9033-4f9f-49fd-b132-a656d945a762", 00:18:50.304 "strip_size_kb": 0, 00:18:50.304 "state": "online", 00:18:50.304 "raid_level": "raid1", 00:18:50.304 "superblock": true, 00:18:50.304 "num_base_bdevs": 2, 00:18:50.304 "num_base_bdevs_discovered": 1, 00:18:50.304 "num_base_bdevs_operational": 1, 00:18:50.304 "base_bdevs_list": [ 00:18:50.304 { 00:18:50.304 "name": null, 00:18:50.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.305 "is_configured": false, 
00:18:50.305 "data_offset": 0, 00:18:50.305 "data_size": 7936 00:18:50.305 }, 00:18:50.305 { 00:18:50.305 "name": "BaseBdev2", 00:18:50.305 "uuid": "132282cc-bf28-5801-86b7-cbbca9e5423f", 00:18:50.305 "is_configured": true, 00:18:50.305 "data_offset": 256, 00:18:50.305 "data_size": 7936 00:18:50.305 } 00:18:50.305 ] 00:18:50.305 }' 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87029 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 87029 ']' 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 87029 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:50.305 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.563 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87029 00:18:50.563 killing process with pid 87029 00:18:50.563 Received shutdown signal, test time was about 60.000000 seconds 00:18:50.563 00:18:50.563 Latency(us) 00:18:50.563 [2024-11-06T12:49:39.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.563 [2024-11-06T12:49:39.220Z] =================================================================================================================== 00:18:50.563 [2024-11-06T12:49:39.221Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.564 
12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:50.564 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:50.564 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87029' 00:18:50.564 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 87029 00:18:50.564 [2024-11-06 12:49:38.990123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.564 12:49:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 87029 00:18:50.564 [2024-11-06 12:49:38.990336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.564 [2024-11-06 12:49:38.990423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.564 [2024-11-06 12:49:38.990444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:50.822 [2024-11-06 12:49:39.278956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.758 12:49:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:51.758 ************************************ 00:18:51.758 END TEST raid_rebuild_test_sb_4k 00:18:51.758 ************************************ 00:18:51.758 00:18:51.758 real 0m22.033s 00:18:51.758 user 0m29.781s 00:18:51.758 sys 0m2.632s 00:18:51.758 12:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:51.758 12:49:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.016 12:49:40 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:52.016 12:49:40 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:52.016 
12:49:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:52.016 12:49:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:52.016 12:49:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.016 ************************************ 00:18:52.016 START TEST raid_state_function_test_sb_md_separate 00:18:52.016 ************************************ 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:52.017 Process raid pid: 87738 00:18:52.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87738 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87738' 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87738 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87738 ']' 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.017 12:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.017 [2024-11-06 12:49:40.539756] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:18:52.017 [2024-11-06 12:49:40.539924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.276 [2024-11-06 12:49:40.716598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.276 [2024-11-06 12:49:40.866886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.534 [2024-11-06 12:49:41.098322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.534 [2024-11-06 12:49:41.098533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.135 [2024-11-06 12:49:41.511893] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.135 [2024-11-06 12:49:41.511996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.135 [2024-11-06 12:49:41.512015] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.135 [2024-11-06 12:49:41.512033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.135 12:49:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.135 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.136 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.136 12:49:41 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.136 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.136 "name": "Existed_Raid", 00:18:53.136 "uuid": "30ba2b1b-2c6c-4422-bbd6-fc450ac89917", 00:18:53.136 "strip_size_kb": 0, 00:18:53.136 "state": "configuring", 00:18:53.136 "raid_level": "raid1", 00:18:53.136 "superblock": true, 00:18:53.136 "num_base_bdevs": 2, 00:18:53.136 "num_base_bdevs_discovered": 0, 00:18:53.136 "num_base_bdevs_operational": 2, 00:18:53.136 "base_bdevs_list": [ 00:18:53.136 { 00:18:53.136 "name": "BaseBdev1", 00:18:53.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.136 "is_configured": false, 00:18:53.136 "data_offset": 0, 00:18:53.136 "data_size": 0 00:18:53.136 }, 00:18:53.136 { 00:18:53.136 "name": "BaseBdev2", 00:18:53.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.136 "is_configured": false, 00:18:53.136 "data_offset": 0, 00:18:53.136 "data_size": 0 00:18:53.136 } 00:18:53.136 ] 00:18:53.136 }' 00:18:53.136 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.136 12:49:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.394 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:53.394 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.394 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.394 [2024-11-06 12:49:42.047962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.394 [2024-11-06 12:49:42.048027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:53.653 
12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.653 [2024-11-06 12:49:42.055923] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.653 [2024-11-06 12:49:42.055991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.653 [2024-11-06 12:49:42.056008] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.653 [2024-11-06 12:49:42.056027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.653 [2024-11-06 12:49:42.104771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.653 BaseBdev1 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:53.653 12:49:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.653 [ 00:18:53.653 { 00:18:53.653 "name": "BaseBdev1", 00:18:53.653 "aliases": [ 00:18:53.653 "6bb43893-055a-4180-a19f-449fd982e792" 00:18:53.653 ], 00:18:53.653 "product_name": "Malloc disk", 00:18:53.653 "block_size": 4096, 00:18:53.653 "num_blocks": 8192, 00:18:53.653 "uuid": "6bb43893-055a-4180-a19f-449fd982e792", 00:18:53.653 "md_size": 32, 00:18:53.653 "md_interleave": false, 00:18:53.653 "dif_type": 0, 00:18:53.653 "assigned_rate_limits": { 00:18:53.653 
"rw_ios_per_sec": 0, 00:18:53.653 "rw_mbytes_per_sec": 0, 00:18:53.653 "r_mbytes_per_sec": 0, 00:18:53.653 "w_mbytes_per_sec": 0 00:18:53.653 }, 00:18:53.653 "claimed": true, 00:18:53.653 "claim_type": "exclusive_write", 00:18:53.653 "zoned": false, 00:18:53.653 "supported_io_types": { 00:18:53.653 "read": true, 00:18:53.653 "write": true, 00:18:53.653 "unmap": true, 00:18:53.653 "flush": true, 00:18:53.653 "reset": true, 00:18:53.653 "nvme_admin": false, 00:18:53.653 "nvme_io": false, 00:18:53.653 "nvme_io_md": false, 00:18:53.653 "write_zeroes": true, 00:18:53.653 "zcopy": true, 00:18:53.653 "get_zone_info": false, 00:18:53.653 "zone_management": false, 00:18:53.653 "zone_append": false, 00:18:53.653 "compare": false, 00:18:53.653 "compare_and_write": false, 00:18:53.653 "abort": true, 00:18:53.653 "seek_hole": false, 00:18:53.653 "seek_data": false, 00:18:53.653 "copy": true, 00:18:53.653 "nvme_iov_md": false 00:18:53.653 }, 00:18:53.653 "memory_domains": [ 00:18:53.653 { 00:18:53.653 "dma_device_id": "system", 00:18:53.653 "dma_device_type": 1 00:18:53.653 }, 00:18:53.653 { 00:18:53.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.653 "dma_device_type": 2 00:18:53.653 } 00:18:53.653 ], 00:18:53.653 "driver_specific": {} 00:18:53.653 } 00:18:53.653 ] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.653 12:49:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.653 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.653 "name": "Existed_Raid", 00:18:53.653 "uuid": "c53ce536-7c5e-4528-b755-ba07b483c04f", 00:18:53.653 "strip_size_kb": 0, 00:18:53.653 "state": "configuring", 00:18:53.653 "raid_level": "raid1", 00:18:53.653 "superblock": true, 00:18:53.653 "num_base_bdevs": 2, 00:18:53.653 "num_base_bdevs_discovered": 1, 00:18:53.653 "num_base_bdevs_operational": 2, 00:18:53.654 
"base_bdevs_list": [ 00:18:53.654 { 00:18:53.654 "name": "BaseBdev1", 00:18:53.654 "uuid": "6bb43893-055a-4180-a19f-449fd982e792", 00:18:53.654 "is_configured": true, 00:18:53.654 "data_offset": 256, 00:18:53.654 "data_size": 7936 00:18:53.654 }, 00:18:53.654 { 00:18:53.654 "name": "BaseBdev2", 00:18:53.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.654 "is_configured": false, 00:18:53.654 "data_offset": 0, 00:18:53.654 "data_size": 0 00:18:53.654 } 00:18:53.654 ] 00:18:53.654 }' 00:18:53.654 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.654 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.221 [2024-11-06 12:49:42.665083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.221 [2024-11-06 12:49:42.665156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.221 [2024-11-06 12:49:42.677096] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.221 [2024-11-06 12:49:42.679902] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.221 [2024-11-06 12:49:42.679975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.221 "name": "Existed_Raid", 00:18:54.221 "uuid": "249ef582-b7f3-4d06-b933-1884a59c6571", 00:18:54.221 "strip_size_kb": 0, 00:18:54.221 "state": "configuring", 00:18:54.221 "raid_level": "raid1", 00:18:54.221 "superblock": true, 00:18:54.221 "num_base_bdevs": 2, 00:18:54.221 "num_base_bdevs_discovered": 1, 00:18:54.221 "num_base_bdevs_operational": 2, 00:18:54.221 "base_bdevs_list": [ 00:18:54.221 { 00:18:54.221 "name": "BaseBdev1", 00:18:54.221 "uuid": "6bb43893-055a-4180-a19f-449fd982e792", 00:18:54.221 "is_configured": true, 00:18:54.221 "data_offset": 256, 00:18:54.221 "data_size": 7936 00:18:54.221 }, 00:18:54.221 { 00:18:54.221 "name": "BaseBdev2", 00:18:54.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.221 "is_configured": false, 00:18:54.221 "data_offset": 0, 00:18:54.221 "data_size": 0 00:18:54.221 } 00:18:54.221 ] 00:18:54.221 }' 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.221 12:49:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:54.789 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.789 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 [2024-11-06 12:49:43.248677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.789 [2024-11-06 12:49:43.249239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:54.789 [2024-11-06 12:49:43.249270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:54.789 [2024-11-06 12:49:43.249376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:54.789 [2024-11-06 12:49:43.249546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:54.789 [2024-11-06 12:49:43.249566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:54.789 BaseBdev2 00:18:54.790 [2024-11-06 12:49:43.249685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:54.790 12:49:43 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.790 [ 00:18:54.790 { 00:18:54.790 "name": "BaseBdev2", 00:18:54.790 "aliases": [ 00:18:54.790 "e2cca4f8-8cf9-4f99-916f-881243fadd31" 00:18:54.790 ], 00:18:54.790 "product_name": "Malloc disk", 00:18:54.790 "block_size": 4096, 00:18:54.790 "num_blocks": 8192, 00:18:54.790 "uuid": "e2cca4f8-8cf9-4f99-916f-881243fadd31", 00:18:54.790 "md_size": 32, 00:18:54.790 "md_interleave": false, 00:18:54.790 "dif_type": 0, 00:18:54.790 "assigned_rate_limits": { 00:18:54.790 "rw_ios_per_sec": 0, 00:18:54.790 "rw_mbytes_per_sec": 0, 00:18:54.790 "r_mbytes_per_sec": 0, 00:18:54.790 "w_mbytes_per_sec": 0 00:18:54.790 }, 00:18:54.790 "claimed": true, 00:18:54.790 "claim_type": "exclusive_write", 00:18:54.790 "zoned": false, 00:18:54.790 "supported_io_types": { 00:18:54.790 "read": true, 00:18:54.790 "write": true, 00:18:54.790 "unmap": true, 00:18:54.790 "flush": true, 00:18:54.790 "reset": true, 00:18:54.790 "nvme_admin": false, 00:18:54.790 "nvme_io": false, 00:18:54.790 "nvme_io_md": 
false, 00:18:54.790 "write_zeroes": true, 00:18:54.790 "zcopy": true, 00:18:54.790 "get_zone_info": false, 00:18:54.790 "zone_management": false, 00:18:54.790 "zone_append": false, 00:18:54.790 "compare": false, 00:18:54.790 "compare_and_write": false, 00:18:54.790 "abort": true, 00:18:54.790 "seek_hole": false, 00:18:54.790 "seek_data": false, 00:18:54.790 "copy": true, 00:18:54.790 "nvme_iov_md": false 00:18:54.790 }, 00:18:54.790 "memory_domains": [ 00:18:54.790 { 00:18:54.790 "dma_device_id": "system", 00:18:54.790 "dma_device_type": 1 00:18:54.790 }, 00:18:54.790 { 00:18:54.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.790 "dma_device_type": 2 00:18:54.790 } 00:18:54.790 ], 00:18:54.790 "driver_specific": {} 00:18:54.790 } 00:18:54.790 ] 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.790 "name": "Existed_Raid", 00:18:54.790 "uuid": "249ef582-b7f3-4d06-b933-1884a59c6571", 00:18:54.790 "strip_size_kb": 0, 00:18:54.790 "state": "online", 00:18:54.790 "raid_level": "raid1", 00:18:54.790 "superblock": true, 00:18:54.790 "num_base_bdevs": 2, 00:18:54.790 "num_base_bdevs_discovered": 2, 00:18:54.790 "num_base_bdevs_operational": 2, 00:18:54.790 "base_bdevs_list": [ 00:18:54.790 { 00:18:54.790 "name": "BaseBdev1", 00:18:54.790 "uuid": "6bb43893-055a-4180-a19f-449fd982e792", 00:18:54.790 "is_configured": true, 00:18:54.790 "data_offset": 256, 00:18:54.790 "data_size": 7936 00:18:54.790 }, 00:18:54.790 { 00:18:54.790 "name": "BaseBdev2", 00:18:54.790 
"uuid": "e2cca4f8-8cf9-4f99-916f-881243fadd31", 00:18:54.790 "is_configured": true, 00:18:54.790 "data_offset": 256, 00:18:54.790 "data_size": 7936 00:18:54.790 } 00:18:54.790 ] 00:18:54.790 }' 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.790 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:55.358 [2024-11-06 12:49:43.793357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.358 12:49:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.358 "name": "Existed_Raid", 00:18:55.358 "aliases": [ 00:18:55.358 "249ef582-b7f3-4d06-b933-1884a59c6571" 00:18:55.358 ], 00:18:55.358 "product_name": "Raid Volume", 00:18:55.358 "block_size": 4096, 00:18:55.358 "num_blocks": 7936, 00:18:55.358 "uuid": "249ef582-b7f3-4d06-b933-1884a59c6571", 00:18:55.358 "md_size": 32, 00:18:55.358 "md_interleave": false, 00:18:55.358 "dif_type": 0, 00:18:55.358 "assigned_rate_limits": { 00:18:55.358 "rw_ios_per_sec": 0, 00:18:55.358 "rw_mbytes_per_sec": 0, 00:18:55.358 "r_mbytes_per_sec": 0, 00:18:55.358 "w_mbytes_per_sec": 0 00:18:55.358 }, 00:18:55.358 "claimed": false, 00:18:55.358 "zoned": false, 00:18:55.358 "supported_io_types": { 00:18:55.358 "read": true, 00:18:55.358 "write": true, 00:18:55.358 "unmap": false, 00:18:55.358 "flush": false, 00:18:55.358 "reset": true, 00:18:55.358 "nvme_admin": false, 00:18:55.358 "nvme_io": false, 00:18:55.358 "nvme_io_md": false, 00:18:55.358 "write_zeroes": true, 00:18:55.358 "zcopy": false, 00:18:55.358 "get_zone_info": false, 00:18:55.358 "zone_management": false, 00:18:55.358 "zone_append": false, 00:18:55.358 "compare": false, 00:18:55.358 "compare_and_write": false, 00:18:55.358 "abort": false, 00:18:55.358 "seek_hole": false, 00:18:55.358 "seek_data": false, 00:18:55.358 "copy": false, 00:18:55.358 "nvme_iov_md": false 00:18:55.358 }, 00:18:55.358 "memory_domains": [ 00:18:55.358 { 00:18:55.358 "dma_device_id": "system", 00:18:55.358 "dma_device_type": 1 00:18:55.358 }, 00:18:55.358 { 00:18:55.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.358 "dma_device_type": 2 00:18:55.358 }, 00:18:55.358 { 00:18:55.358 "dma_device_id": "system", 00:18:55.358 "dma_device_type": 1 00:18:55.358 }, 00:18:55.358 { 00:18:55.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.358 "dma_device_type": 2 00:18:55.358 } 00:18:55.358 ], 00:18:55.358 "driver_specific": { 00:18:55.358 "raid": 
{ 00:18:55.358 "uuid": "249ef582-b7f3-4d06-b933-1884a59c6571", 00:18:55.358 "strip_size_kb": 0, 00:18:55.358 "state": "online", 00:18:55.358 "raid_level": "raid1", 00:18:55.358 "superblock": true, 00:18:55.358 "num_base_bdevs": 2, 00:18:55.358 "num_base_bdevs_discovered": 2, 00:18:55.358 "num_base_bdevs_operational": 2, 00:18:55.358 "base_bdevs_list": [ 00:18:55.358 { 00:18:55.358 "name": "BaseBdev1", 00:18:55.358 "uuid": "6bb43893-055a-4180-a19f-449fd982e792", 00:18:55.358 "is_configured": true, 00:18:55.358 "data_offset": 256, 00:18:55.358 "data_size": 7936 00:18:55.358 }, 00:18:55.358 { 00:18:55.358 "name": "BaseBdev2", 00:18:55.358 "uuid": "e2cca4f8-8cf9-4f99-916f-881243fadd31", 00:18:55.358 "is_configured": true, 00:18:55.358 "data_offset": 256, 00:18:55.358 "data_size": 7936 00:18:55.358 } 00:18:55.358 ] 00:18:55.358 } 00:18:55.358 } 00:18:55.358 }' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:55.358 BaseBdev2' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:55.358 12:49:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.617 [2024-11-06 12:49:44.053038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.617 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.618 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.618 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.618 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.618 "name": "Existed_Raid", 00:18:55.618 "uuid": "249ef582-b7f3-4d06-b933-1884a59c6571", 00:18:55.618 "strip_size_kb": 0, 00:18:55.618 "state": "online", 00:18:55.618 "raid_level": "raid1", 00:18:55.618 "superblock": true, 00:18:55.618 "num_base_bdevs": 2, 00:18:55.618 "num_base_bdevs_discovered": 1, 00:18:55.618 "num_base_bdevs_operational": 1, 00:18:55.618 "base_bdevs_list": [ 00:18:55.618 { 00:18:55.618 "name": null, 00:18:55.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.618 "is_configured": false, 00:18:55.618 "data_offset": 0, 00:18:55.618 "data_size": 7936 00:18:55.618 }, 00:18:55.618 { 00:18:55.618 "name": "BaseBdev2", 00:18:55.618 "uuid": "e2cca4f8-8cf9-4f99-916f-881243fadd31", 00:18:55.618 "is_configured": true, 00:18:55.618 "data_offset": 256, 00:18:55.618 "data_size": 7936 00:18:55.618 } 00:18:55.618 ] 00:18:55.618 }' 00:18:55.618 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:55.618 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 [2024-11-06 12:49:44.717423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:56.185 [2024-11-06 12:49:44.717600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.185 [2024-11-06 12:49:44.820681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:56.185 [2024-11-06 12:49:44.820987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.185 [2024-11-06 12:49:44.821175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87738 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87738 ']' 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@956 -- # kill -0 87738 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87738 00:18:56.445 killing process with pid 87738 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87738' 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87738 00:18:56.445 [2024-11-06 12:49:44.909691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.445 12:49:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87738 00:18:56.445 [2024-11-06 12:49:44.925794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.406 12:49:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:57.406 00:18:57.406 real 0m5.607s 00:18:57.406 user 0m8.333s 00:18:57.406 sys 0m0.876s 00:18:57.406 12:49:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.406 12:49:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.406 ************************************ 00:18:57.406 END TEST raid_state_function_test_sb_md_separate 00:18:57.406 ************************************ 00:18:57.665 12:49:46 bdev_raid -- 
bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:57.665 12:49:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:57.665 12:49:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.665 12:49:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.665 ************************************ 00:18:57.665 START TEST raid_superblock_test_md_separate 00:18:57.665 ************************************ 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87986 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87986 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87986 ']' 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.665 12:49:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.665 [2024-11-06 12:49:46.215731] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:18:57.665 [2024-11-06 12:49:46.215912] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87986 ] 00:18:57.924 [2024-11-06 12:49:46.404957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.924 [2024-11-06 12:49:46.560916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.182 [2024-11-06 12:49:46.787824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.182 [2024-11-06 12:49:46.788064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.751 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.751 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:58.751 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.752 12:49:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.752 malloc1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.752 [2024-11-06 12:49:47.276705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.752 [2024-11-06 12:49:47.276986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.752 [2024-11-06 12:49:47.277039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:58.752 [2024-11-06 12:49:47.277057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.752 [2024-11-06 12:49:47.280351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.752 [2024-11-06 12:49:47.280396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.752 pt1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.752 
12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.752 malloc2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.752 [2024-11-06 12:49:47.337870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.752 [2024-11-06 12:49:47.338114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.752 [2024-11-06 12:49:47.338224] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:18:58.752 [2024-11-06 12:49:47.338435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.752 [2024-11-06 12:49:47.341279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.752 [2024-11-06 12:49:47.341476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.752 pt2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.752 [2024-11-06 12:49:47.349944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.752 [2024-11-06 12:49:47.352792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.752 [2024-11-06 12:49:47.353033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:58.752 [2024-11-06 12:49:47.353060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.752 [2024-11-06 12:49:47.353231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:58.752 [2024-11-06 12:49:47.353433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:58.752 [2024-11-06 12:49:47.353453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:58.752 [2024-11-06 12:49:47.353608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:58.752 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.011 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.011 "name": "raid_bdev1", 00:18:59.011 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:18:59.011 "strip_size_kb": 0, 00:18:59.011 "state": "online", 00:18:59.011 "raid_level": "raid1", 00:18:59.011 "superblock": true, 00:18:59.011 "num_base_bdevs": 2, 00:18:59.011 "num_base_bdevs_discovered": 2, 00:18:59.011 "num_base_bdevs_operational": 2, 00:18:59.011 "base_bdevs_list": [ 00:18:59.011 { 00:18:59.011 "name": "pt1", 00:18:59.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.011 "is_configured": true, 00:18:59.011 "data_offset": 256, 00:18:59.011 "data_size": 7936 00:18:59.011 }, 00:18:59.011 { 00:18:59.011 "name": "pt2", 00:18:59.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.011 "is_configured": true, 00:18:59.011 "data_offset": 256, 00:18:59.011 "data_size": 7936 00:18:59.011 } 00:18:59.011 ] 00:18:59.011 }' 00:18:59.011 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.011 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.270 [2024-11-06 12:49:47.886554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.270 12:49:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.528 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.528 "name": "raid_bdev1", 00:18:59.528 "aliases": [ 00:18:59.528 "689e70d8-4e09-46cc-ae4d-d7a3cf231847" 00:18:59.528 ], 00:18:59.528 "product_name": "Raid Volume", 00:18:59.528 "block_size": 4096, 00:18:59.528 "num_blocks": 7936, 00:18:59.528 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:18:59.528 "md_size": 32, 00:18:59.528 "md_interleave": false, 00:18:59.528 "dif_type": 0, 00:18:59.528 "assigned_rate_limits": { 00:18:59.528 "rw_ios_per_sec": 0, 00:18:59.528 "rw_mbytes_per_sec": 0, 00:18:59.528 "r_mbytes_per_sec": 0, 00:18:59.528 "w_mbytes_per_sec": 0 00:18:59.528 }, 00:18:59.528 "claimed": false, 00:18:59.528 "zoned": false, 00:18:59.528 "supported_io_types": { 00:18:59.528 "read": true, 00:18:59.528 "write": true, 00:18:59.528 "unmap": false, 00:18:59.528 "flush": false, 00:18:59.528 "reset": true, 00:18:59.528 "nvme_admin": false, 00:18:59.528 "nvme_io": false, 00:18:59.528 "nvme_io_md": false, 00:18:59.528 "write_zeroes": true, 00:18:59.528 "zcopy": false, 00:18:59.528 "get_zone_info": false, 00:18:59.528 "zone_management": false, 00:18:59.528 "zone_append": false, 00:18:59.528 "compare": 
false, 00:18:59.528 "compare_and_write": false, 00:18:59.528 "abort": false, 00:18:59.528 "seek_hole": false, 00:18:59.528 "seek_data": false, 00:18:59.528 "copy": false, 00:18:59.528 "nvme_iov_md": false 00:18:59.528 }, 00:18:59.528 "memory_domains": [ 00:18:59.528 { 00:18:59.528 "dma_device_id": "system", 00:18:59.528 "dma_device_type": 1 00:18:59.528 }, 00:18:59.528 { 00:18:59.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.528 "dma_device_type": 2 00:18:59.528 }, 00:18:59.528 { 00:18:59.528 "dma_device_id": "system", 00:18:59.528 "dma_device_type": 1 00:18:59.528 }, 00:18:59.528 { 00:18:59.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.528 "dma_device_type": 2 00:18:59.528 } 00:18:59.528 ], 00:18:59.528 "driver_specific": { 00:18:59.528 "raid": { 00:18:59.528 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:18:59.528 "strip_size_kb": 0, 00:18:59.528 "state": "online", 00:18:59.528 "raid_level": "raid1", 00:18:59.528 "superblock": true, 00:18:59.528 "num_base_bdevs": 2, 00:18:59.528 "num_base_bdevs_discovered": 2, 00:18:59.528 "num_base_bdevs_operational": 2, 00:18:59.528 "base_bdevs_list": [ 00:18:59.528 { 00:18:59.528 "name": "pt1", 00:18:59.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.528 "is_configured": true, 00:18:59.528 "data_offset": 256, 00:18:59.528 "data_size": 7936 00:18:59.528 }, 00:18:59.528 { 00:18:59.528 "name": "pt2", 00:18:59.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.528 "is_configured": true, 00:18:59.528 "data_offset": 256, 00:18:59.528 "data_size": 7936 00:18:59.528 } 00:18:59.528 ] 00:18:59.528 } 00:18:59.529 } 00:18:59.529 }' 00:18:59.529 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.529 12:49:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:59.529 pt2' 00:18:59.529 12:49:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.529 12:49:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.529 [2024-11-06 12:49:48.158583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.529 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.788 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=689e70d8-4e09-46cc-ae4d-d7a3cf231847 00:18:59.788 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 689e70d8-4e09-46cc-ae4d-d7a3cf231847 ']' 00:18:59.788 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:59.788 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.788 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.788 [2024-11-06 12:49:48.210138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.789 [2024-11-06 12:49:48.210170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.789 
[2024-11-06 12:49:48.210351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.789 [2024-11-06 12:49:48.210442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.789 [2024-11-06 12:49:48.210463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 [2024-11-06 12:49:48.362301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:59.789 [2024-11-06 12:49:48.365240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:59.789 [2024-11-06 12:49:48.365526] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:59.789 [2024-11-06 12:49:48.365827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:59.789 [2024-11-06 12:49:48.366019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.789 [2024-11-06 12:49:48.366069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:59.789 request: 00:18:59.789 { 00:18:59.789 "name": "raid_bdev1", 00:18:59.789 "raid_level": "raid1", 00:18:59.789 "base_bdevs": [ 00:18:59.789 "malloc1", 00:18:59.789 "malloc2" 00:18:59.789 ], 00:18:59.789 "superblock": false, 00:18:59.789 "method": "bdev_raid_create", 00:18:59.789 "req_id": 1 00:18:59.789 } 00:18:59.789 Got JSON-RPC error response 00:18:59.789 response: 00:18:59.789 { 00:18:59.789 "code": -17, 00:18:59.789 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:18:59.789 } 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.789 [2024-11-06 12:49:48.430558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:59.789 [2024-11-06 12:49:48.430875] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.789 [2024-11-06 12:49:48.430915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:59.789 [2024-11-06 12:49:48.430935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.789 [2024-11-06 12:49:48.433949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.789 [2024-11-06 12:49:48.434010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:59.789 [2024-11-06 12:49:48.434093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:59.789 [2024-11-06 12:49:48.434171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:59.789 pt1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.789 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.048 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.048 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.048 "name": "raid_bdev1", 00:19:00.048 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:00.048 "strip_size_kb": 0, 00:19:00.048 "state": "configuring", 00:19:00.048 "raid_level": "raid1", 00:19:00.048 "superblock": true, 00:19:00.048 "num_base_bdevs": 2, 00:19:00.048 "num_base_bdevs_discovered": 1, 00:19:00.048 "num_base_bdevs_operational": 2, 00:19:00.048 "base_bdevs_list": [ 00:19:00.048 { 00:19:00.048 "name": "pt1", 00:19:00.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.048 "is_configured": true, 00:19:00.048 "data_offset": 256, 00:19:00.048 "data_size": 7936 00:19:00.048 }, 00:19:00.048 { 00:19:00.048 "name": null, 00:19:00.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.048 "is_configured": false, 00:19:00.048 "data_offset": 256, 00:19:00.048 "data_size": 7936 00:19:00.048 } 00:19:00.048 ] 00:19:00.048 }' 00:19:00.048 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.048 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.307 12:49:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:00.307 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:00.307 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:00.307 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:00.307 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.307 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.307 [2024-11-06 12:49:48.934672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:00.307 [2024-11-06 12:49:48.934982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.307 [2024-11-06 12:49:48.935028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:00.307 [2024-11-06 12:49:48.935048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.307 [2024-11-06 12:49:48.935417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.307 [2024-11-06 12:49:48.935452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:00.307 [2024-11-06 12:49:48.935530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:00.307 [2024-11-06 12:49:48.935569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.307 [2024-11-06 12:49:48.935726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:00.307 [2024-11-06 12:49:48.935747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:00.307 [2024-11-06 12:49:48.935840] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:00.308 [2024-11-06 12:49:48.935994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:00.308 [2024-11-06 12:49:48.936009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:00.308 [2024-11-06 12:49:48.936141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.308 pt2 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.308 12:49:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.308 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.566 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.566 "name": "raid_bdev1", 00:19:00.566 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:00.566 "strip_size_kb": 0, 00:19:00.566 "state": "online", 00:19:00.566 "raid_level": "raid1", 00:19:00.566 "superblock": true, 00:19:00.566 "num_base_bdevs": 2, 00:19:00.566 "num_base_bdevs_discovered": 2, 00:19:00.566 "num_base_bdevs_operational": 2, 00:19:00.566 "base_bdevs_list": [ 00:19:00.566 { 00:19:00.566 "name": "pt1", 00:19:00.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:00.566 "is_configured": true, 00:19:00.566 "data_offset": 256, 00:19:00.566 "data_size": 7936 00:19:00.566 }, 00:19:00.566 { 00:19:00.566 "name": "pt2", 00:19:00.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.566 "is_configured": true, 00:19:00.566 "data_offset": 256, 00:19:00.566 "data_size": 7936 00:19:00.566 } 00:19:00.566 ] 00:19:00.566 }' 00:19:00.566 12:49:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.566 12:49:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.826 [2024-11-06 12:49:49.447177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.826 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.085 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.085 "name": "raid_bdev1", 00:19:01.085 "aliases": [ 00:19:01.085 "689e70d8-4e09-46cc-ae4d-d7a3cf231847" 00:19:01.085 ], 00:19:01.085 "product_name": "Raid Volume", 00:19:01.085 "block_size": 4096, 00:19:01.085 "num_blocks": 7936, 00:19:01.085 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:01.085 "md_size": 32, 00:19:01.085 "md_interleave": false, 00:19:01.085 "dif_type": 0, 00:19:01.085 "assigned_rate_limits": { 00:19:01.085 "rw_ios_per_sec": 0, 00:19:01.085 "rw_mbytes_per_sec": 0, 00:19:01.085 "r_mbytes_per_sec": 0, 00:19:01.085 
"w_mbytes_per_sec": 0 00:19:01.085 }, 00:19:01.085 "claimed": false, 00:19:01.085 "zoned": false, 00:19:01.085 "supported_io_types": { 00:19:01.085 "read": true, 00:19:01.085 "write": true, 00:19:01.085 "unmap": false, 00:19:01.085 "flush": false, 00:19:01.085 "reset": true, 00:19:01.085 "nvme_admin": false, 00:19:01.085 "nvme_io": false, 00:19:01.085 "nvme_io_md": false, 00:19:01.085 "write_zeroes": true, 00:19:01.085 "zcopy": false, 00:19:01.085 "get_zone_info": false, 00:19:01.085 "zone_management": false, 00:19:01.085 "zone_append": false, 00:19:01.085 "compare": false, 00:19:01.085 "compare_and_write": false, 00:19:01.085 "abort": false, 00:19:01.085 "seek_hole": false, 00:19:01.085 "seek_data": false, 00:19:01.085 "copy": false, 00:19:01.085 "nvme_iov_md": false 00:19:01.085 }, 00:19:01.085 "memory_domains": [ 00:19:01.085 { 00:19:01.085 "dma_device_id": "system", 00:19:01.085 "dma_device_type": 1 00:19:01.085 }, 00:19:01.085 { 00:19:01.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.085 "dma_device_type": 2 00:19:01.085 }, 00:19:01.085 { 00:19:01.085 "dma_device_id": "system", 00:19:01.085 "dma_device_type": 1 00:19:01.085 }, 00:19:01.085 { 00:19:01.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.085 "dma_device_type": 2 00:19:01.085 } 00:19:01.085 ], 00:19:01.085 "driver_specific": { 00:19:01.085 "raid": { 00:19:01.085 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:01.085 "strip_size_kb": 0, 00:19:01.085 "state": "online", 00:19:01.085 "raid_level": "raid1", 00:19:01.085 "superblock": true, 00:19:01.085 "num_base_bdevs": 2, 00:19:01.085 "num_base_bdevs_discovered": 2, 00:19:01.085 "num_base_bdevs_operational": 2, 00:19:01.085 "base_bdevs_list": [ 00:19:01.085 { 00:19:01.085 "name": "pt1", 00:19:01.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.085 "is_configured": true, 00:19:01.085 "data_offset": 256, 00:19:01.085 "data_size": 7936 00:19:01.085 }, 00:19:01.085 { 00:19:01.085 "name": "pt2", 00:19:01.085 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:01.085 "is_configured": true, 00:19:01.085 "data_offset": 256, 00:19:01.085 "data_size": 7936 00:19:01.085 } 00:19:01.085 ] 00:19:01.085 } 00:19:01.085 } 00:19:01.085 }' 00:19:01.085 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:01.086 pt2' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.086 [2024-11-06 12:49:49.715217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.086 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 689e70d8-4e09-46cc-ae4d-d7a3cf231847 '!=' 689e70d8-4e09-46cc-ae4d-d7a3cf231847 ']' 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.344 [2024-11-06 12:49:49.774896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.344 12:49:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.344 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.344 "name": "raid_bdev1", 00:19:01.344 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:01.344 "strip_size_kb": 0, 00:19:01.344 "state": "online", 00:19:01.344 "raid_level": "raid1", 00:19:01.344 "superblock": true, 00:19:01.344 "num_base_bdevs": 2, 00:19:01.344 "num_base_bdevs_discovered": 1, 00:19:01.344 "num_base_bdevs_operational": 1, 00:19:01.344 "base_bdevs_list": [ 00:19:01.344 { 00:19:01.344 "name": null, 00:19:01.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.344 "is_configured": false, 00:19:01.344 "data_offset": 0, 00:19:01.344 "data_size": 7936 00:19:01.344 }, 00:19:01.344 { 00:19:01.344 "name": "pt2", 00:19:01.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.344 "is_configured": true, 00:19:01.344 "data_offset": 256, 00:19:01.344 "data_size": 7936 00:19:01.344 } 00:19:01.345 ] 00:19:01.345 }' 00:19:01.345 12:49:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.345 12:49:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.913 [2024-11-06 12:49:50.283081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.913 [2024-11-06 12:49:50.283120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.913 [2024-11-06 12:49:50.283262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.913 [2024-11-06 12:49:50.283340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.913 [2024-11-06 12:49:50.283362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:01.913 12:49:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.913 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 [2024-11-06 12:49:50.359077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.914 [2024-11-06 12:49:50.359165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.914 [2024-11-06 12:49:50.359208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:01.914 [2024-11-06 12:49:50.359232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.914 [2024-11-06 12:49:50.362189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:19:01.914 [2024-11-06 12:49:50.362288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.914 [2024-11-06 12:49:50.362372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:01.914 [2024-11-06 12:49:50.362442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.914 [2024-11-06 12:49:50.362566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:01.914 [2024-11-06 12:49:50.362596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:01.914 [2024-11-06 12:49:50.362693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:01.914 [2024-11-06 12:49:50.362851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:01.914 [2024-11-06 12:49:50.362866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:01.914 pt2 00:19:01.914 [2024-11-06 12:49:50.363039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.914 "name": "raid_bdev1", 00:19:01.914 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:01.914 "strip_size_kb": 0, 00:19:01.914 "state": "online", 00:19:01.914 "raid_level": "raid1", 00:19:01.914 "superblock": true, 00:19:01.914 "num_base_bdevs": 2, 00:19:01.914 "num_base_bdevs_discovered": 1, 00:19:01.914 "num_base_bdevs_operational": 1, 00:19:01.914 "base_bdevs_list": [ 00:19:01.914 { 00:19:01.914 "name": null, 00:19:01.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.914 "is_configured": false, 00:19:01.914 "data_offset": 256, 00:19:01.914 "data_size": 7936 00:19:01.914 }, 00:19:01.914 { 00:19:01.914 "name": "pt2", 00:19:01.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.914 "is_configured": true, 
00:19:01.914 "data_offset": 256, 00:19:01.914 "data_size": 7936 00:19:01.914 } 00:19:01.914 ] 00:19:01.914 }' 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.914 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.482 [2024-11-06 12:49:50.891326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.482 [2024-11-06 12:49:50.891399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.482 [2024-11-06 12:49:50.891522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.482 [2024-11-06 12:49:50.891604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.482 [2024-11-06 12:49:50.891620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.482 12:49:50 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:02.482 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.483 [2024-11-06 12:49:50.955313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.483 [2024-11-06 12:49:50.955517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.483 [2024-11-06 12:49:50.955668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:02.483 [2024-11-06 12:49:50.955821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.483 [2024-11-06 12:49:50.958806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.483 [2024-11-06 12:49:50.958992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.483 [2024-11-06 12:49:50.959078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:02.483 [2024-11-06 12:49:50.959139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.483 [2024-11-06 12:49:50.959359] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:02.483 
[2024-11-06 12:49:50.959390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.483 [2024-11-06 12:49:50.959414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:02.483 [2024-11-06 12:49:50.959494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.483 pt1 00:19:02.483 [2024-11-06 12:49:50.959638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:02.483 [2024-11-06 12:49:50.959655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.483 [2024-11-06 12:49:50.959749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:02.483 [2024-11-06 12:49:50.959905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:02.483 [2024-11-06 12:49:50.959925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:02.483 [2024-11-06 12:49:50.960059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.483 12:49:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.483 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.483 "name": "raid_bdev1", 00:19:02.483 "uuid": "689e70d8-4e09-46cc-ae4d-d7a3cf231847", 00:19:02.483 "strip_size_kb": 0, 00:19:02.483 "state": "online", 00:19:02.483 "raid_level": "raid1", 00:19:02.483 "superblock": true, 00:19:02.483 "num_base_bdevs": 2, 00:19:02.483 "num_base_bdevs_discovered": 1, 00:19:02.483 "num_base_bdevs_operational": 1, 00:19:02.483 "base_bdevs_list": [ 00:19:02.483 { 00:19:02.483 "name": null, 00:19:02.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.483 "is_configured": false, 00:19:02.483 "data_offset": 256, 00:19:02.483 "data_size": 7936 00:19:02.483 }, 00:19:02.483 { 00:19:02.483 
"name": "pt2", 00:19:02.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.483 "is_configured": true, 00:19:02.483 "data_offset": 256, 00:19:02.483 "data_size": 7936 00:19:02.483 } 00:19:02.483 ] 00:19:02.483 }' 00:19:02.483 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.483 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.051 [2024-11-06 12:49:51.488216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
689e70d8-4e09-46cc-ae4d-d7a3cf231847 '!=' 689e70d8-4e09-46cc-ae4d-d7a3cf231847 ']' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87986 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87986 ']' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87986 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87986 00:19:03.051 killing process with pid 87986 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87986' 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87986 00:19:03.051 [2024-11-06 12:49:51.581746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.051 12:49:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87986 00:19:03.051 [2024-11-06 12:49:51.581877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.051 [2024-11-06 12:49:51.581953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.051 [2024-11-06 12:49:51.581981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:19:03.309 [2024-11-06 12:49:51.792387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.687 ************************************ 00:19:04.687 END TEST raid_superblock_test_md_separate 00:19:04.687 ************************************ 00:19:04.687 12:49:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:04.687 00:19:04.687 real 0m6.841s 00:19:04.687 user 0m10.659s 00:19:04.687 sys 0m1.076s 00:19:04.687 12:49:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:04.687 12:49:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.687 12:49:52 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:04.687 12:49:52 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:04.687 12:49:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:04.687 12:49:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:04.687 12:49:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.687 ************************************ 00:19:04.687 START TEST raid_rebuild_test_sb_md_separate 00:19:04.687 ************************************ 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:04.687 12:49:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:04.687 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:04.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88320 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88320 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88320 ']' 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:04.688 12:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.688 [2024-11-06 12:49:53.116486] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:19:04.688 [2024-11-06 12:49:53.116962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88320 ] 00:19:04.688 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:04.688 Zero copy mechanism will not be used. 00:19:04.688 [2024-11-06 12:49:53.300545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.946 [2024-11-06 12:49:53.460568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.204 [2024-11-06 12:49:53.684922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.204 [2024-11-06 12:49:53.685337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.464 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:05.464 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:19:05.464 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.464 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:05.464 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.464 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.723 BaseBdev1_malloc 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 [2024-11-06 12:49:54.154719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.724 [2024-11-06 12:49:54.154939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.724 [2024-11-06 12:49:54.155016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:05.724 [2024-11-06 12:49:54.155051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.724 [2024-11-06 12:49:54.158651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.724 [2024-11-06 12:49:54.158718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.724 BaseBdev1 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 BaseBdev2_malloc 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 [2024-11-06 12:49:54.220113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:05.724 [2024-11-06 12:49:54.220269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.724 [2024-11-06 12:49:54.220341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:05.724 [2024-11-06 12:49:54.220371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.724 [2024-11-06 12:49:54.223613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.724 [2024-11-06 12:49:54.223684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:05.724 BaseBdev2 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 spare_malloc 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 spare_delay 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 [2024-11-06 12:49:54.303350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.724 [2024-11-06 12:49:54.303452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.724 [2024-11-06 12:49:54.303498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:05.724 [2024-11-06 12:49:54.303527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.724 [2024-11-06 12:49:54.306752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.724 [2024-11-06 12:49:54.306805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.724 spare 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 [2024-11-06 12:49:54.315659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.724 [2024-11-06 12:49:54.318795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.724 [2024-11-06 12:49:54.319014] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:05.724 [2024-11-06 12:49:54.319037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:05.724 [2024-11-06 12:49:54.319124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:05.724 [2024-11-06 12:49:54.319598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:05.724 [2024-11-06 12:49:54.319780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:05.724 [2024-11-06 12:49:54.320030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.724 12:49:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.724 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.983 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.983 "name": "raid_bdev1", 00:19:05.983 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:05.983 "strip_size_kb": 0, 00:19:05.983 "state": "online", 00:19:05.983 "raid_level": "raid1", 00:19:05.983 "superblock": true, 00:19:05.983 "num_base_bdevs": 2, 00:19:05.983 "num_base_bdevs_discovered": 2, 00:19:05.983 "num_base_bdevs_operational": 2, 00:19:05.983 "base_bdevs_list": [ 00:19:05.983 { 00:19:05.983 "name": "BaseBdev1", 00:19:05.983 "uuid": "bf4d288c-d1c7-5542-a856-9658c97414bc", 00:19:05.983 "is_configured": true, 00:19:05.983 "data_offset": 256, 00:19:05.983 "data_size": 7936 00:19:05.983 }, 00:19:05.983 { 00:19:05.983 "name": "BaseBdev2", 00:19:05.983 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:05.983 "is_configured": true, 00:19:05.983 "data_offset": 256, 00:19:05.983 "data_size": 7936 00:19:05.983 } 00:19:05.983 ] 00:19:05.983 }' 00:19:05.983 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.983 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.243 [2024-11-06 12:49:54.852702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.243 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:06.502 12:49:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.502 12:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:06.761 [2024-11-06 12:49:55.244505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:06.761 /dev/nbd0 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.761 1+0 records in 00:19:06.761 1+0 records out 00:19:06.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706574 s, 5.8 MB/s 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:06.761 12:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 
oflag=direct 00:19:07.698 7936+0 records in 00:19:07.698 7936+0 records out 00:19:07.698 32505856 bytes (33 MB, 31 MiB) copied, 0.959448 s, 33.9 MB/s 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.698 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.265 [2024-11-06 12:49:56.624334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:08.265 12:49:56 
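The `dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct` step above seeds the raid bdev with 32 MB of random data through its nbd export before the rebuild test begins. A standalone sketch of the same fill-and-measure pattern, with a temp file standing in for the nbd device and a reduced block count, since no running SPDK target is assumed here:

```shell
# Fill a target with fixed-size random blocks, as bdev_raid.sh@635 does to
# /dev/nbd0. A temp file stands in for the nbd device; the count is reduced
# from 7936 to 16 to keep the sketch fast. oflag=direct is omitted because
# O_DIRECT on a regular file is filesystem-dependent.
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=16 status=none
size=$(stat -c %s "$tmp")
echo "wrote $size bytes"          # 16 * 4096 = 65536
rm -f "$tmp"
```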
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.265 [2024-11-06 12:49:56.636462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.265 "name": "raid_bdev1", 00:19:08.265 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:08.265 "strip_size_kb": 0, 00:19:08.265 "state": "online", 00:19:08.265 "raid_level": "raid1", 00:19:08.265 "superblock": true, 00:19:08.265 "num_base_bdevs": 2, 00:19:08.265 "num_base_bdevs_discovered": 1, 00:19:08.265 "num_base_bdevs_operational": 1, 00:19:08.265 "base_bdevs_list": [ 00:19:08.265 { 00:19:08.265 "name": null, 00:19:08.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.265 "is_configured": false, 00:19:08.265 "data_offset": 0, 00:19:08.265 "data_size": 7936 00:19:08.265 }, 00:19:08.265 { 00:19:08.265 "name": "BaseBdev2", 00:19:08.265 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:08.265 "is_configured": true, 00:19:08.265 "data_offset": 256, 00:19:08.265 "data_size": 7936 00:19:08.265 } 00:19:08.265 ] 00:19:08.265 }' 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.265 12:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.524 12:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.524 12:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.524 12:49:57 
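The `verify_raid_bdev_state` helper above captures `bdev_raid_get_bdevs` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares fields such as `state`, `raid_level`, and `num_base_bdevs_discovered`. A minimal standalone sketch of that kind of check, run against a fragment of the JSON captured in this log (sed is used in place of jq so nothing beyond a POSIX shell is assumed):

```shell
# JSON fragment copied from the bdev_raid_get_bdevs output logged above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'

# Extract the fields verify_raid_bdev_state compares against its arguments.
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')
raid_level=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"raid_level": "\([a-z0-9]*\)".*/\1/p')
[ "$state" = online ] && [ "$raid_level" = raid1 ] && echo "raid_bdev1 is online (raid1)"
```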
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.524 [2024-11-06 12:49:57.152681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.524 [2024-11-06 12:49:57.167117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:08.524 12:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.524 12:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:08.524 [2024-11-06 12:49:57.169866] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.901 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.901 "name": "raid_bdev1", 00:19:09.901 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:09.901 "strip_size_kb": 0, 00:19:09.901 "state": "online", 00:19:09.901 "raid_level": "raid1", 00:19:09.901 "superblock": true, 00:19:09.901 "num_base_bdevs": 2, 00:19:09.901 "num_base_bdevs_discovered": 2, 00:19:09.901 "num_base_bdevs_operational": 2, 00:19:09.901 "process": { 00:19:09.901 "type": "rebuild", 00:19:09.901 "target": "spare", 00:19:09.901 "progress": { 00:19:09.901 "blocks": 2560, 00:19:09.901 "percent": 32 00:19:09.901 } 00:19:09.901 }, 00:19:09.901 "base_bdevs_list": [ 00:19:09.901 { 00:19:09.901 "name": "spare", 00:19:09.901 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:09.901 "is_configured": true, 00:19:09.901 "data_offset": 256, 00:19:09.901 "data_size": 7936 00:19:09.901 }, 00:19:09.901 { 00:19:09.901 "name": "BaseBdev2", 00:19:09.901 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:09.901 "is_configured": true, 00:19:09.901 "data_offset": 256, 00:19:09.901 "data_size": 7936 00:19:09.901 } 00:19:09.901 ] 00:19:09.901 }' 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.902 
[2024-11-06 12:49:58.324364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.902 [2024-11-06 12:49:58.382205] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:09.902 [2024-11-06 12:49:58.382331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.902 [2024-11-06 12:49:58.382368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.902 [2024-11-06 12:49:58.382385] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.902 12:49:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.902 "name": "raid_bdev1", 00:19:09.902 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:09.902 "strip_size_kb": 0, 00:19:09.902 "state": "online", 00:19:09.902 "raid_level": "raid1", 00:19:09.902 "superblock": true, 00:19:09.902 "num_base_bdevs": 2, 00:19:09.902 "num_base_bdevs_discovered": 1, 00:19:09.902 "num_base_bdevs_operational": 1, 00:19:09.902 "base_bdevs_list": [ 00:19:09.902 { 00:19:09.902 "name": null, 00:19:09.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.902 "is_configured": false, 00:19:09.902 "data_offset": 0, 00:19:09.902 "data_size": 7936 00:19:09.902 }, 00:19:09.902 { 00:19:09.902 "name": "BaseBdev2", 00:19:09.902 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:09.902 "is_configured": true, 00:19:09.902 "data_offset": 256, 00:19:09.902 "data_size": 7936 00:19:09.902 } 00:19:09.902 ] 00:19:09.902 }' 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.902 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.469 "name": "raid_bdev1", 00:19:10.469 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:10.469 "strip_size_kb": 0, 00:19:10.469 "state": "online", 00:19:10.469 "raid_level": "raid1", 00:19:10.469 "superblock": true, 00:19:10.469 "num_base_bdevs": 2, 00:19:10.469 "num_base_bdevs_discovered": 1, 00:19:10.469 "num_base_bdevs_operational": 1, 00:19:10.469 "base_bdevs_list": [ 00:19:10.469 { 00:19:10.469 "name": null, 00:19:10.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.469 "is_configured": false, 00:19:10.469 "data_offset": 0, 00:19:10.469 "data_size": 7936 00:19:10.469 }, 00:19:10.469 { 00:19:10.469 "name": "BaseBdev2", 00:19:10.469 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:10.469 "is_configured": true, 00:19:10.469 "data_offset": 256, 00:19:10.469 "data_size": 7936 00:19:10.469 } 00:19:10.469 ] 00:19:10.469 }' 
00:19:10.469 12:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.469 [2024-11-06 12:49:59.078025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.469 [2024-11-06 12:49:59.091761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.469 12:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:10.469 [2024-11-06 12:49:59.094582] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.845 12:50:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.845 "name": "raid_bdev1", 00:19:11.845 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:11.845 "strip_size_kb": 0, 00:19:11.845 "state": "online", 00:19:11.845 "raid_level": "raid1", 00:19:11.845 "superblock": true, 00:19:11.845 "num_base_bdevs": 2, 00:19:11.845 "num_base_bdevs_discovered": 2, 00:19:11.845 "num_base_bdevs_operational": 2, 00:19:11.845 "process": { 00:19:11.845 "type": "rebuild", 00:19:11.845 "target": "spare", 00:19:11.845 "progress": { 00:19:11.845 "blocks": 2560, 00:19:11.845 "percent": 32 00:19:11.845 } 00:19:11.845 }, 00:19:11.845 "base_bdevs_list": [ 00:19:11.845 { 00:19:11.845 "name": "spare", 00:19:11.845 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:11.845 "is_configured": true, 00:19:11.845 "data_offset": 256, 00:19:11.845 "data_size": 7936 00:19:11.845 }, 00:19:11.845 { 00:19:11.845 "name": "BaseBdev2", 00:19:11.845 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:11.845 "is_configured": true, 00:19:11.845 "data_offset": 256, 00:19:11.845 "data_size": 7936 00:19:11.845 } 00:19:11.845 ] 00:19:11.845 }' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:11.845 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=774 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.845 12:50:00 
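The `[: =: unary operator expected` error captured above comes from the expression `'[' = false ']'` at bdev_raid.sh line 666: a variable expanded unquoted inside `[ ]` turned out to be empty, leaving `[` with `= false` and no left-hand operand. A minimal reproduction of the failure, alongside the usual quoting fix:

```shell
flag=""                                  # empty, like the variable in the log
# Unquoted: the command becomes `[ = false ]`, which [ rejects with status 2.
if [ $flag = false ] 2>/dev/null; then
    unquoted=matched
else
    unquoted=errored
fi
# Quoted: the empty string remains in place as an operand, so the test is
# well-formed and simply evaluates to false.
if [ "$flag" = false ]; then
    quoted=matched
else
    quoted=not_false
fi
echo "unquoted=$unquoted quoted=$quoted"
```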
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.845 "name": "raid_bdev1", 00:19:11.845 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:11.845 "strip_size_kb": 0, 00:19:11.845 "state": "online", 00:19:11.845 "raid_level": "raid1", 00:19:11.845 "superblock": true, 00:19:11.845 "num_base_bdevs": 2, 00:19:11.845 "num_base_bdevs_discovered": 2, 00:19:11.845 "num_base_bdevs_operational": 2, 00:19:11.845 "process": { 00:19:11.845 "type": "rebuild", 00:19:11.845 "target": "spare", 00:19:11.845 "progress": { 00:19:11.845 "blocks": 2816, 00:19:11.845 "percent": 35 00:19:11.845 } 00:19:11.845 }, 00:19:11.845 "base_bdevs_list": [ 00:19:11.845 { 00:19:11.845 "name": "spare", 00:19:11.845 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:11.845 "is_configured": true, 00:19:11.845 "data_offset": 256, 00:19:11.845 "data_size": 7936 00:19:11.845 }, 00:19:11.845 { 00:19:11.845 "name": "BaseBdev2", 00:19:11.845 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:11.845 "is_configured": true, 00:19:11.845 "data_offset": 256, 00:19:11.845 "data_size": 7936 00:19:11.845 } 00:19:11.845 ] 00:19:11.845 }' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.845 12:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.780 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.038 "name": "raid_bdev1", 00:19:13.038 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:13.038 "strip_size_kb": 0, 00:19:13.038 "state": 
"online", 00:19:13.038 "raid_level": "raid1", 00:19:13.038 "superblock": true, 00:19:13.038 "num_base_bdevs": 2, 00:19:13.038 "num_base_bdevs_discovered": 2, 00:19:13.038 "num_base_bdevs_operational": 2, 00:19:13.038 "process": { 00:19:13.038 "type": "rebuild", 00:19:13.038 "target": "spare", 00:19:13.038 "progress": { 00:19:13.038 "blocks": 5888, 00:19:13.038 "percent": 74 00:19:13.038 } 00:19:13.038 }, 00:19:13.038 "base_bdevs_list": [ 00:19:13.038 { 00:19:13.038 "name": "spare", 00:19:13.038 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:13.038 "is_configured": true, 00:19:13.038 "data_offset": 256, 00:19:13.038 "data_size": 7936 00:19:13.038 }, 00:19:13.038 { 00:19:13.038 "name": "BaseBdev2", 00:19:13.038 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:13.038 "is_configured": true, 00:19:13.038 "data_offset": 256, 00:19:13.038 "data_size": 7936 00:19:13.038 } 00:19:13.038 ] 00:19:13.038 }' 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.038 12:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.606 [2024-11-06 12:50:02.224957] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:13.606 [2024-11-06 12:50:02.225378] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:13.606 [2024-11-06 12:50:02.225576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 
-- # (( SECONDS < timeout )) 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.173 "name": "raid_bdev1", 00:19:14.173 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:14.173 "strip_size_kb": 0, 00:19:14.173 "state": "online", 00:19:14.173 "raid_level": "raid1", 00:19:14.173 "superblock": true, 00:19:14.173 "num_base_bdevs": 2, 00:19:14.173 "num_base_bdevs_discovered": 2, 00:19:14.173 "num_base_bdevs_operational": 2, 00:19:14.173 "base_bdevs_list": [ 00:19:14.173 { 00:19:14.173 "name": "spare", 00:19:14.173 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:14.173 "is_configured": true, 00:19:14.173 "data_offset": 256, 00:19:14.173 "data_size": 7936 00:19:14.173 }, 00:19:14.173 
{ 00:19:14.173 "name": "BaseBdev2", 00:19:14.173 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:14.173 "is_configured": true, 00:19:14.173 "data_offset": 256, 00:19:14.173 "data_size": 7936 00:19:14.173 } 00:19:14.173 ] 00:19:14.173 }' 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.173 12:50:02 
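The polling pattern at bdev_raid.sh@706-711 above (`local timeout=774`, `(( SECONDS < timeout ))`, `sleep 1`) re-reads the raid bdev's rebuild progress each second until the `process` block disappears from the RPC output or the deadline passes. A sketch of that wait loop with the `rpc_cmd` progress read simulated by a counter, so it runs without an SPDK target:

```shell
# Poll until rebuild progress reaches 100% or the timeout expires, mirroring
# the SECONDS-based loop in bdev_raid.sh (bash's SECONDS counts wall time).
timeout=10
percent=32            # stands in for .process.progress.percent from rpc_cmd
finished=no
while (( SECONDS < timeout )); do
    if (( percent >= 100 )); then
        finished=yes
        break
    fi
    percent=$(( percent + 34 ))   # simulated progress; the real loop sleeps 1s
done
echo "finished=$finished percent=$percent"
```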
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.173 "name": "raid_bdev1", 00:19:14.173 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:14.173 "strip_size_kb": 0, 00:19:14.173 "state": "online", 00:19:14.173 "raid_level": "raid1", 00:19:14.173 "superblock": true, 00:19:14.173 "num_base_bdevs": 2, 00:19:14.173 "num_base_bdevs_discovered": 2, 00:19:14.173 "num_base_bdevs_operational": 2, 00:19:14.173 "base_bdevs_list": [ 00:19:14.173 { 00:19:14.173 "name": "spare", 00:19:14.173 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:14.173 "is_configured": true, 00:19:14.173 "data_offset": 256, 00:19:14.173 "data_size": 7936 00:19:14.173 }, 00:19:14.173 { 00:19:14.173 "name": "BaseBdev2", 00:19:14.173 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:14.173 "is_configured": true, 00:19:14.173 "data_offset": 256, 00:19:14.173 "data_size": 7936 00:19:14.173 } 00:19:14.173 ] 00:19:14.173 }' 00:19:14.173 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.432 12:50:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.432 "name": "raid_bdev1", 00:19:14.432 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:14.432 "strip_size_kb": 0, 00:19:14.432 "state": "online", 00:19:14.432 "raid_level": "raid1", 00:19:14.432 "superblock": true, 00:19:14.432 "num_base_bdevs": 2, 00:19:14.432 "num_base_bdevs_discovered": 2, 00:19:14.432 "num_base_bdevs_operational": 2, 00:19:14.432 "base_bdevs_list": [ 00:19:14.432 { 00:19:14.432 "name": "spare", 00:19:14.432 "uuid": 
"40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:14.432 "is_configured": true, 00:19:14.432 "data_offset": 256, 00:19:14.432 "data_size": 7936 00:19:14.432 }, 00:19:14.432 { 00:19:14.432 "name": "BaseBdev2", 00:19:14.432 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:14.432 "is_configured": true, 00:19:14.432 "data_offset": 256, 00:19:14.432 "data_size": 7936 00:19:14.432 } 00:19:14.432 ] 00:19:14.432 }' 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.432 12:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.998 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:14.998 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.998 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.998 [2024-11-06 12:50:03.373624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.998 [2024-11-06 12:50:03.373669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.999 [2024-11-06 12:50:03.373805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.999 [2024-11-06 12:50:03.373923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.999 [2024-11-06 12:50:03.373942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.999 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:19:15.257 /dev/nbd0 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:15.257 1+0 records in 00:19:15.257 1+0 records out 00:19:15.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577129 s, 7.1 MB/s 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.257 12:50:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.257 12:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:15.515 /dev/nbd1 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:19:15.515 1+0 records in 00:19:15.515 1+0 records out 00:19:15.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509967 s, 8.0 MB/s 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.515 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:15.774 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.031 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:16.290 
12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.290 [2024-11-06 12:50:04.921364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.290 [2024-11-06 12:50:04.921436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.290 [2024-11-06 12:50:04.921473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:16.290 [2024-11-06 12:50:04.921490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.290 [2024-11-06 12:50:04.924445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.290 [2024-11-06 12:50:04.924492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.290 [2024-11-06 12:50:04.924580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:19:16.290 [2024-11-06 12:50:04.924659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.290 [2024-11-06 12:50:04.924847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.290 spare 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.290 12:50:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.549 [2024-11-06 12:50:05.024999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:16.549 [2024-11-06 12:50:05.025082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:16.549 [2024-11-06 12:50:05.025294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:16.549 [2024-11-06 12:50:05.025557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:16.549 [2024-11-06 12:50:05.025586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:16.549 [2024-11-06 12:50:05.025802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.549 "name": "raid_bdev1", 00:19:16.549 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:16.549 "strip_size_kb": 0, 00:19:16.549 "state": "online", 00:19:16.549 "raid_level": "raid1", 00:19:16.549 "superblock": true, 00:19:16.549 "num_base_bdevs": 2, 00:19:16.549 "num_base_bdevs_discovered": 2, 00:19:16.549 "num_base_bdevs_operational": 2, 00:19:16.549 "base_bdevs_list": [ 
00:19:16.549 { 00:19:16.549 "name": "spare", 00:19:16.549 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:16.549 "is_configured": true, 00:19:16.549 "data_offset": 256, 00:19:16.549 "data_size": 7936 00:19:16.549 }, 00:19:16.549 { 00:19:16.549 "name": "BaseBdev2", 00:19:16.549 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:16.549 "is_configured": true, 00:19:16.549 "data_offset": 256, 00:19:16.549 "data_size": 7936 00:19:16.549 } 00:19:16.549 ] 00:19:16.549 }' 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.549 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.118 "name": "raid_bdev1", 00:19:17.118 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:17.118 "strip_size_kb": 0, 00:19:17.118 "state": "online", 00:19:17.118 "raid_level": "raid1", 00:19:17.118 "superblock": true, 00:19:17.118 "num_base_bdevs": 2, 00:19:17.118 "num_base_bdevs_discovered": 2, 00:19:17.118 "num_base_bdevs_operational": 2, 00:19:17.118 "base_bdevs_list": [ 00:19:17.118 { 00:19:17.118 "name": "spare", 00:19:17.118 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:17.118 "is_configured": true, 00:19:17.118 "data_offset": 256, 00:19:17.118 "data_size": 7936 00:19:17.118 }, 00:19:17.118 { 00:19:17.118 "name": "BaseBdev2", 00:19:17.118 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:17.118 "is_configured": true, 00:19:17.118 "data_offset": 256, 00:19:17.118 "data_size": 7936 00:19:17.118 } 00:19:17.118 ] 00:19:17.118 }' 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.118 [2024-11-06 12:50:05.762082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.118 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.119 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.384 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.384 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.384 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.384 "name": "raid_bdev1", 00:19:17.384 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:17.384 "strip_size_kb": 0, 00:19:17.384 "state": "online", 00:19:17.384 "raid_level": "raid1", 00:19:17.384 "superblock": true, 00:19:17.384 "num_base_bdevs": 2, 00:19:17.384 "num_base_bdevs_discovered": 1, 00:19:17.384 "num_base_bdevs_operational": 1, 00:19:17.384 "base_bdevs_list": [ 00:19:17.384 { 00:19:17.384 "name": null, 00:19:17.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.384 "is_configured": false, 00:19:17.384 "data_offset": 0, 00:19:17.384 "data_size": 7936 00:19:17.384 }, 00:19:17.384 { 00:19:17.384 "name": "BaseBdev2", 00:19:17.384 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:17.384 "is_configured": true, 00:19:17.384 "data_offset": 256, 00:19:17.384 "data_size": 7936 00:19:17.384 } 00:19:17.384 ] 00:19:17.384 }' 00:19:17.384 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.384 12:50:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.642 12:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:17.642 12:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:17.642 12:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:17.642 [2024-11-06 12:50:06.250234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.642 [2024-11-06 12:50:06.250533] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.642 [2024-11-06 12:50:06.250561] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:17.642 [2024-11-06 12:50:06.250609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.642 [2024-11-06 12:50:06.263596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:17.642 12:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.642 12:50:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:17.642 [2024-11-06 12:50:06.266382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.018 "name": "raid_bdev1", 00:19:19.018 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:19.018 "strip_size_kb": 0, 00:19:19.018 "state": "online", 00:19:19.018 "raid_level": "raid1", 00:19:19.018 "superblock": true, 00:19:19.018 "num_base_bdevs": 2, 00:19:19.018 "num_base_bdevs_discovered": 2, 00:19:19.018 "num_base_bdevs_operational": 2, 00:19:19.018 "process": { 00:19:19.018 "type": "rebuild", 00:19:19.018 "target": "spare", 00:19:19.018 "progress": { 00:19:19.018 "blocks": 2560, 00:19:19.018 "percent": 32 00:19:19.018 } 00:19:19.018 }, 00:19:19.018 "base_bdevs_list": [ 00:19:19.018 { 00:19:19.018 "name": "spare", 00:19:19.018 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:19.018 "is_configured": true, 00:19:19.018 "data_offset": 256, 00:19:19.018 "data_size": 7936 00:19:19.018 }, 00:19:19.018 { 00:19:19.018 "name": "BaseBdev2", 00:19:19.018 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:19.018 "is_configured": true, 00:19:19.018 "data_offset": 256, 00:19:19.018 "data_size": 7936 00:19:19.018 } 00:19:19.018 ] 00:19:19.018 }' 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.018 12:50:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.018 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.018 [2024-11-06 12:50:07.436394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.019 [2024-11-06 12:50:07.477999] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.019 [2024-11-06 12:50:07.478124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.019 [2024-11-06 12:50:07.478151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.019 [2024-11-06 12:50:07.478181] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.019 12:50:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.019 "name": "raid_bdev1", 00:19:19.019 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:19.019 "strip_size_kb": 0, 00:19:19.019 "state": "online", 00:19:19.019 "raid_level": "raid1", 00:19:19.019 "superblock": true, 00:19:19.019 "num_base_bdevs": 2, 00:19:19.019 "num_base_bdevs_discovered": 1, 00:19:19.019 "num_base_bdevs_operational": 1, 00:19:19.019 "base_bdevs_list": [ 00:19:19.019 { 00:19:19.019 "name": null, 00:19:19.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.019 "is_configured": false, 00:19:19.019 "data_offset": 0, 00:19:19.019 "data_size": 7936 00:19:19.019 }, 00:19:19.019 { 00:19:19.019 "name": "BaseBdev2", 00:19:19.019 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:19.019 "is_configured": true, 00:19:19.019 "data_offset": 256, 00:19:19.019 "data_size": 7936 00:19:19.019 } 
00:19:19.019 ] 00:19:19.019 }' 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.019 12:50:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.586 12:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:19.586 12:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.586 12:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:19.586 [2024-11-06 12:50:08.017825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:19.586 [2024-11-06 12:50:08.017923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.586 [2024-11-06 12:50:08.017960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:19.586 [2024-11-06 12:50:08.017980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.586 [2024-11-06 12:50:08.018368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.586 [2024-11-06 12:50:08.018410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:19.586 [2024-11-06 12:50:08.018501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:19.586 [2024-11-06 12:50:08.018527] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.586 [2024-11-06 12:50:08.018541] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.586 [2024-11-06 12:50:08.018573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.586 [2024-11-06 12:50:08.031767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:19.586 spare 00:19:19.586 12:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.586 12:50:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:19.586 [2024-11-06 12:50:08.034580] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.521 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.521 "name": 
"raid_bdev1", 00:19:20.521 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:20.521 "strip_size_kb": 0, 00:19:20.521 "state": "online", 00:19:20.521 "raid_level": "raid1", 00:19:20.521 "superblock": true, 00:19:20.521 "num_base_bdevs": 2, 00:19:20.521 "num_base_bdevs_discovered": 2, 00:19:20.521 "num_base_bdevs_operational": 2, 00:19:20.521 "process": { 00:19:20.521 "type": "rebuild", 00:19:20.521 "target": "spare", 00:19:20.522 "progress": { 00:19:20.522 "blocks": 2560, 00:19:20.522 "percent": 32 00:19:20.522 } 00:19:20.522 }, 00:19:20.522 "base_bdevs_list": [ 00:19:20.522 { 00:19:20.522 "name": "spare", 00:19:20.522 "uuid": "40ff649e-6f07-5a2e-985e-4025327e1f01", 00:19:20.522 "is_configured": true, 00:19:20.522 "data_offset": 256, 00:19:20.522 "data_size": 7936 00:19:20.522 }, 00:19:20.522 { 00:19:20.522 "name": "BaseBdev2", 00:19:20.522 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:20.522 "is_configured": true, 00:19:20.522 "data_offset": 256, 00:19:20.522 "data_size": 7936 00:19:20.522 } 00:19:20.522 ] 00:19:20.522 }' 00:19:20.522 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.522 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.522 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.780 [2024-11-06 12:50:09.200806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:20.780 [2024-11-06 12:50:09.246501] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.780 [2024-11-06 12:50:09.246628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.780 [2024-11-06 12:50:09.246659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.780 [2024-11-06 12:50:09.246672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.780 "name": "raid_bdev1", 00:19:20.780 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:20.780 "strip_size_kb": 0, 00:19:20.780 "state": "online", 00:19:20.780 "raid_level": "raid1", 00:19:20.780 "superblock": true, 00:19:20.780 "num_base_bdevs": 2, 00:19:20.780 "num_base_bdevs_discovered": 1, 00:19:20.780 "num_base_bdevs_operational": 1, 00:19:20.780 "base_bdevs_list": [ 00:19:20.780 { 00:19:20.780 "name": null, 00:19:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.780 "is_configured": false, 00:19:20.780 "data_offset": 0, 00:19:20.780 "data_size": 7936 00:19:20.780 }, 00:19:20.780 { 00:19:20.780 "name": "BaseBdev2", 00:19:20.780 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:20.780 "is_configured": true, 00:19:20.780 "data_offset": 256, 00:19:20.780 "data_size": 7936 00:19:20.780 } 00:19:20.780 ] 00:19:20.780 }' 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.780 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.350 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.350 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.350 12:50:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.350 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.350 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.350 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.351 "name": "raid_bdev1", 00:19:21.351 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:21.351 "strip_size_kb": 0, 00:19:21.351 "state": "online", 00:19:21.351 "raid_level": "raid1", 00:19:21.351 "superblock": true, 00:19:21.351 "num_base_bdevs": 2, 00:19:21.351 "num_base_bdevs_discovered": 1, 00:19:21.351 "num_base_bdevs_operational": 1, 00:19:21.351 "base_bdevs_list": [ 00:19:21.351 { 00:19:21.351 "name": null, 00:19:21.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.351 "is_configured": false, 00:19:21.351 "data_offset": 0, 00:19:21.351 "data_size": 7936 00:19:21.351 }, 00:19:21.351 { 00:19:21.351 "name": "BaseBdev2", 00:19:21.351 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:21.351 "is_configured": true, 00:19:21.351 "data_offset": 256, 00:19:21.351 "data_size": 7936 00:19:21.351 } 00:19:21.351 ] 00:19:21.351 }' 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:21.351 [2024-11-06 12:50:09.974356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:21.351 [2024-11-06 12:50:09.974440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.351 [2024-11-06 12:50:09.974483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:21.351 [2024-11-06 12:50:09.974499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.351 [2024-11-06 12:50:09.974817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.351 [2024-11-06 12:50:09.974850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:21.351 [2024-11-06 12:50:09.974927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:21.351 [2024-11-06 12:50:09.974949] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.351 [2024-11-06 12:50:09.974964] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.351 [2024-11-06 12:50:09.974978] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:21.351 BaseBdev1 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.351 12:50:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:22.725 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.725 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.725 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.725 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.725 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.726 12:50:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.726 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.726 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.726 "name": "raid_bdev1", 00:19:22.726 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:22.726 "strip_size_kb": 0, 00:19:22.726 "state": "online", 00:19:22.726 "raid_level": "raid1", 00:19:22.726 "superblock": true, 00:19:22.726 "num_base_bdevs": 2, 00:19:22.726 "num_base_bdevs_discovered": 1, 00:19:22.726 "num_base_bdevs_operational": 1, 00:19:22.726 "base_bdevs_list": [ 00:19:22.726 { 00:19:22.726 "name": null, 00:19:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.726 "is_configured": false, 00:19:22.726 "data_offset": 0, 00:19:22.726 "data_size": 7936 00:19:22.726 }, 00:19:22.726 { 00:19:22.726 "name": "BaseBdev2", 00:19:22.726 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:22.726 "is_configured": true, 00:19:22.726 "data_offset": 256, 00:19:22.726 "data_size": 7936 00:19:22.726 } 00:19:22.726 ] 00:19:22.726 }' 00:19:22.726 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.726 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.985 "name": "raid_bdev1", 00:19:22.985 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:22.985 "strip_size_kb": 0, 00:19:22.985 "state": "online", 00:19:22.985 "raid_level": "raid1", 00:19:22.985 "superblock": true, 00:19:22.985 "num_base_bdevs": 2, 00:19:22.985 "num_base_bdevs_discovered": 1, 00:19:22.985 "num_base_bdevs_operational": 1, 00:19:22.985 "base_bdevs_list": [ 00:19:22.985 { 00:19:22.985 "name": null, 00:19:22.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.985 "is_configured": false, 00:19:22.985 "data_offset": 0, 00:19:22.985 "data_size": 7936 00:19:22.985 }, 00:19:22.985 { 00:19:22.985 "name": "BaseBdev2", 00:19:22.985 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:22.985 "is_configured": 
true, 00:19:22.985 "data_offset": 256, 00:19:22.985 "data_size": 7936 00:19:22.985 } 00:19:22.985 ] 00:19:22.985 }' 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.985 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.244 [2024-11-06 12:50:11.682954] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.244 [2024-11-06 12:50:11.683255] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.244 [2024-11-06 12:50:11.683297] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.244 request: 00:19:23.244 { 00:19:23.244 "base_bdev": "BaseBdev1", 00:19:23.244 "raid_bdev": "raid_bdev1", 00:19:23.244 "method": "bdev_raid_add_base_bdev", 00:19:23.244 "req_id": 1 00:19:23.244 } 00:19:23.244 Got JSON-RPC error response 00:19:23.244 response: 00:19:23.244 { 00:19:23.244 "code": -22, 00:19:23.244 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:23.244 } 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.244 12:50:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.178 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.179 "name": "raid_bdev1", 00:19:24.179 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:24.179 "strip_size_kb": 0, 00:19:24.179 "state": "online", 00:19:24.179 "raid_level": "raid1", 00:19:24.179 "superblock": true, 00:19:24.179 "num_base_bdevs": 2, 00:19:24.179 "num_base_bdevs_discovered": 1, 00:19:24.179 "num_base_bdevs_operational": 1, 00:19:24.179 "base_bdevs_list": [ 00:19:24.179 { 00:19:24.179 "name": null, 00:19:24.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.179 "is_configured": false, 00:19:24.179 
"data_offset": 0, 00:19:24.179 "data_size": 7936 00:19:24.179 }, 00:19:24.179 { 00:19:24.179 "name": "BaseBdev2", 00:19:24.179 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:24.179 "is_configured": true, 00:19:24.179 "data_offset": 256, 00:19:24.179 "data_size": 7936 00:19:24.179 } 00:19:24.179 ] 00:19:24.179 }' 00:19:24.179 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.179 12:50:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.746 "name": "raid_bdev1", 00:19:24.746 "uuid": "5946e16e-ca8b-49a8-a0fa-6385b26d6773", 00:19:24.746 
"strip_size_kb": 0, 00:19:24.746 "state": "online", 00:19:24.746 "raid_level": "raid1", 00:19:24.746 "superblock": true, 00:19:24.746 "num_base_bdevs": 2, 00:19:24.746 "num_base_bdevs_discovered": 1, 00:19:24.746 "num_base_bdevs_operational": 1, 00:19:24.746 "base_bdevs_list": [ 00:19:24.746 { 00:19:24.746 "name": null, 00:19:24.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.746 "is_configured": false, 00:19:24.746 "data_offset": 0, 00:19:24.746 "data_size": 7936 00:19:24.746 }, 00:19:24.746 { 00:19:24.746 "name": "BaseBdev2", 00:19:24.746 "uuid": "5da768b7-dd4f-5316-80e3-4edd23449018", 00:19:24.746 "is_configured": true, 00:19:24.746 "data_offset": 256, 00:19:24.746 "data_size": 7936 00:19:24.746 } 00:19:24.746 ] 00:19:24.746 }' 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88320 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88320 ']' 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88320 00:19:24.746 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:25.004 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.004 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88320 00:19:25.004 12:50:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:25.004 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:25.004 killing process with pid 88320 00:19:25.004 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88320' 00:19:25.004 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88320 00:19:25.004 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.004 00:19:25.004 Latency(us) 00:19:25.004 [2024-11-06T12:50:13.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.004 [2024-11-06T12:50:13.661Z] =================================================================================================================== 00:19:25.004 [2024-11-06T12:50:13.661Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.004 [2024-11-06 12:50:13.433316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.004 12:50:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88320 00:19:25.004 [2024-11-06 12:50:13.433514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.004 [2024-11-06 12:50:13.433597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.005 [2024-11-06 12:50:13.433623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:25.263 [2024-11-06 12:50:13.749114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.685 12:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:26.685 00:19:26.685 real 0m21.884s 00:19:26.685 user 0m29.437s 00:19:26.685 sys 0m2.731s 00:19:26.685 12:50:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:26.685 12:50:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.685 ************************************ 00:19:26.685 END TEST raid_rebuild_test_sb_md_separate 00:19:26.685 ************************************ 00:19:26.685 12:50:14 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:26.685 12:50:14 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:26.685 12:50:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:26.685 12:50:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:26.685 12:50:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.685 ************************************ 00:19:26.685 START TEST raid_state_function_test_sb_md_interleaved 00:19:26.685 ************************************ 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:26.685 12:50:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89027 00:19:26.685 Process raid pid: 89027 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89027' 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89027 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89027 ']' 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.685 12:50:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.686 [2024-11-06 12:50:15.055389] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:19:26.686 [2024-11-06 12:50:15.056814] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.686 [2024-11-06 12:50:15.255057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.944 [2024-11-06 12:50:15.402129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.204 [2024-11-06 12:50:15.630523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.204 [2024-11-06 12:50:15.630598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.463 [2024-11-06 12:50:16.020019] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:27.463 [2024-11-06 12:50:16.020093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:27.463 [2024-11-06 12:50:16.020113] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.463 [2024-11-06 12:50:16.020130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.463 12:50:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.463 12:50:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.463 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.463 "name": "Existed_Raid", 00:19:27.463 "uuid": "899cb15b-d4d2-47a9-ab03-16769e410a7a", 00:19:27.463 "strip_size_kb": 0, 00:19:27.463 "state": "configuring", 00:19:27.463 "raid_level": "raid1", 00:19:27.463 "superblock": true, 00:19:27.463 "num_base_bdevs": 2, 00:19:27.463 "num_base_bdevs_discovered": 0, 00:19:27.463 "num_base_bdevs_operational": 2, 00:19:27.463 "base_bdevs_list": [ 00:19:27.463 { 00:19:27.463 "name": "BaseBdev1", 00:19:27.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.463 "is_configured": false, 00:19:27.463 "data_offset": 0, 00:19:27.463 "data_size": 0 00:19:27.463 }, 00:19:27.463 { 00:19:27.463 "name": "BaseBdev2", 00:19:27.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.463 "is_configured": false, 00:19:27.464 "data_offset": 0, 00:19:27.464 "data_size": 0 00:19:27.464 } 00:19:27.464 ] 00:19:27.464 }' 00:19:27.464 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.464 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 [2024-11-06 12:50:16.572086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.032 [2024-11-06 12:50:16.572138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 [2024-11-06 12:50:16.580059] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.032 [2024-11-06 12:50:16.580117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.032 [2024-11-06 12:50:16.580133] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.032 [2024-11-06 12:50:16.580158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 [2024-11-06 12:50:16.628839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.032 BaseBdev1 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 [ 00:19:28.032 { 00:19:28.032 "name": "BaseBdev1", 00:19:28.032 "aliases": [ 00:19:28.032 "b9eaedd2-3546-4bf7-b9ea-f09b7fc8f9ff" 00:19:28.032 ], 00:19:28.032 "product_name": "Malloc disk", 00:19:28.032 "block_size": 4128, 00:19:28.032 "num_blocks": 8192, 00:19:28.032 "uuid": "b9eaedd2-3546-4bf7-b9ea-f09b7fc8f9ff", 00:19:28.032 "md_size": 32, 00:19:28.032 
"md_interleave": true, 00:19:28.032 "dif_type": 0, 00:19:28.032 "assigned_rate_limits": { 00:19:28.032 "rw_ios_per_sec": 0, 00:19:28.032 "rw_mbytes_per_sec": 0, 00:19:28.032 "r_mbytes_per_sec": 0, 00:19:28.032 "w_mbytes_per_sec": 0 00:19:28.032 }, 00:19:28.032 "claimed": true, 00:19:28.032 "claim_type": "exclusive_write", 00:19:28.032 "zoned": false, 00:19:28.032 "supported_io_types": { 00:19:28.032 "read": true, 00:19:28.032 "write": true, 00:19:28.032 "unmap": true, 00:19:28.032 "flush": true, 00:19:28.032 "reset": true, 00:19:28.032 "nvme_admin": false, 00:19:28.032 "nvme_io": false, 00:19:28.032 "nvme_io_md": false, 00:19:28.032 "write_zeroes": true, 00:19:28.032 "zcopy": true, 00:19:28.032 "get_zone_info": false, 00:19:28.032 "zone_management": false, 00:19:28.032 "zone_append": false, 00:19:28.032 "compare": false, 00:19:28.032 "compare_and_write": false, 00:19:28.032 "abort": true, 00:19:28.032 "seek_hole": false, 00:19:28.032 "seek_data": false, 00:19:28.032 "copy": true, 00:19:28.032 "nvme_iov_md": false 00:19:28.032 }, 00:19:28.032 "memory_domains": [ 00:19:28.032 { 00:19:28.032 "dma_device_id": "system", 00:19:28.032 "dma_device_type": 1 00:19:28.032 }, 00:19:28.032 { 00:19:28.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.032 "dma_device_type": 2 00:19:28.032 } 00:19:28.032 ], 00:19:28.032 "driver_specific": {} 00:19:28.032 } 00:19:28.032 ] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.032 12:50:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.032 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.291 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.291 "name": "Existed_Raid", 00:19:28.291 "uuid": "708fea33-666c-42c2-bea8-4a153bb765a0", 00:19:28.291 "strip_size_kb": 0, 00:19:28.291 "state": "configuring", 00:19:28.291 "raid_level": "raid1", 
00:19:28.291 "superblock": true, 00:19:28.291 "num_base_bdevs": 2, 00:19:28.291 "num_base_bdevs_discovered": 1, 00:19:28.291 "num_base_bdevs_operational": 2, 00:19:28.291 "base_bdevs_list": [ 00:19:28.291 { 00:19:28.291 "name": "BaseBdev1", 00:19:28.291 "uuid": "b9eaedd2-3546-4bf7-b9ea-f09b7fc8f9ff", 00:19:28.291 "is_configured": true, 00:19:28.291 "data_offset": 256, 00:19:28.291 "data_size": 7936 00:19:28.291 }, 00:19:28.291 { 00:19:28.291 "name": "BaseBdev2", 00:19:28.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.291 "is_configured": false, 00:19:28.291 "data_offset": 0, 00:19:28.291 "data_size": 0 00:19:28.291 } 00:19:28.291 ] 00:19:28.291 }' 00:19:28.291 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.291 12:50:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.857 [2024-11-06 12:50:17.229107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.857 [2024-11-06 12:50:17.229183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.857 [2024-11-06 12:50:17.237157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.857 [2024-11-06 12:50:17.239808] bdev.c:8424:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.857 [2024-11-06 12:50:17.239869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:28.857 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.858 
12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.858 "name": "Existed_Raid", 00:19:28.858 "uuid": "36d50997-5d29-4708-a5a2-ab39fb53954e", 00:19:28.858 "strip_size_kb": 0, 00:19:28.858 "state": "configuring", 00:19:28.858 "raid_level": "raid1", 00:19:28.858 "superblock": true, 00:19:28.858 "num_base_bdevs": 2, 00:19:28.858 "num_base_bdevs_discovered": 1, 00:19:28.858 "num_base_bdevs_operational": 2, 00:19:28.858 "base_bdevs_list": [ 00:19:28.858 { 00:19:28.858 "name": "BaseBdev1", 00:19:28.858 "uuid": "b9eaedd2-3546-4bf7-b9ea-f09b7fc8f9ff", 00:19:28.858 "is_configured": true, 00:19:28.858 "data_offset": 256, 00:19:28.858 "data_size": 7936 00:19:28.858 }, 00:19:28.858 { 00:19:28.858 "name": "BaseBdev2", 00:19:28.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.858 "is_configured": false, 00:19:28.858 "data_offset": 0, 00:19:28.858 "data_size": 0 00:19:28.858 } 00:19:28.858 ] 00:19:28.858 }' 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:28.858 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.424 [2024-11-06 12:50:17.823330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.424 [2024-11-06 12:50:17.823703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:29.424 [2024-11-06 12:50:17.823724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:29.424 [2024-11-06 12:50:17.823840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:29.424 [2024-11-06 12:50:17.823949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:29.424 [2024-11-06 12:50:17.823969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:29.424 [2024-11-06 12:50:17.824059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.424 BaseBdev2 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.424 [ 00:19:29.424 { 00:19:29.424 "name": "BaseBdev2", 00:19:29.424 "aliases": [ 00:19:29.424 "a5481614-a4ac-452a-bd7b-fdf01ec1f7ff" 00:19:29.424 ], 00:19:29.424 "product_name": "Malloc disk", 00:19:29.424 "block_size": 4128, 00:19:29.424 "num_blocks": 8192, 00:19:29.424 "uuid": "a5481614-a4ac-452a-bd7b-fdf01ec1f7ff", 00:19:29.424 "md_size": 32, 00:19:29.424 "md_interleave": true, 00:19:29.424 "dif_type": 0, 00:19:29.424 "assigned_rate_limits": { 00:19:29.424 "rw_ios_per_sec": 0, 00:19:29.424 "rw_mbytes_per_sec": 0, 00:19:29.424 "r_mbytes_per_sec": 0, 00:19:29.424 "w_mbytes_per_sec": 0 00:19:29.424 }, 00:19:29.424 "claimed": true, 00:19:29.424 "claim_type": "exclusive_write", 
00:19:29.424 "zoned": false, 00:19:29.424 "supported_io_types": { 00:19:29.424 "read": true, 00:19:29.424 "write": true, 00:19:29.424 "unmap": true, 00:19:29.424 "flush": true, 00:19:29.424 "reset": true, 00:19:29.424 "nvme_admin": false, 00:19:29.424 "nvme_io": false, 00:19:29.424 "nvme_io_md": false, 00:19:29.424 "write_zeroes": true, 00:19:29.424 "zcopy": true, 00:19:29.424 "get_zone_info": false, 00:19:29.424 "zone_management": false, 00:19:29.424 "zone_append": false, 00:19:29.424 "compare": false, 00:19:29.424 "compare_and_write": false, 00:19:29.424 "abort": true, 00:19:29.424 "seek_hole": false, 00:19:29.424 "seek_data": false, 00:19:29.424 "copy": true, 00:19:29.424 "nvme_iov_md": false 00:19:29.424 }, 00:19:29.424 "memory_domains": [ 00:19:29.424 { 00:19:29.424 "dma_device_id": "system", 00:19:29.424 "dma_device_type": 1 00:19:29.424 }, 00:19:29.424 { 00:19:29.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.424 "dma_device_type": 2 00:19:29.424 } 00:19:29.424 ], 00:19:29.424 "driver_specific": {} 00:19:29.424 } 00:19:29.424 ] 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.424 
12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.424 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.425 "name": "Existed_Raid", 00:19:29.425 "uuid": "36d50997-5d29-4708-a5a2-ab39fb53954e", 00:19:29.425 "strip_size_kb": 0, 00:19:29.425 "state": "online", 00:19:29.425 "raid_level": "raid1", 00:19:29.425 "superblock": true, 00:19:29.425 "num_base_bdevs": 2, 00:19:29.425 "num_base_bdevs_discovered": 2, 00:19:29.425 
"num_base_bdevs_operational": 2, 00:19:29.425 "base_bdevs_list": [ 00:19:29.425 { 00:19:29.425 "name": "BaseBdev1", 00:19:29.425 "uuid": "b9eaedd2-3546-4bf7-b9ea-f09b7fc8f9ff", 00:19:29.425 "is_configured": true, 00:19:29.425 "data_offset": 256, 00:19:29.425 "data_size": 7936 00:19:29.425 }, 00:19:29.425 { 00:19:29.425 "name": "BaseBdev2", 00:19:29.425 "uuid": "a5481614-a4ac-452a-bd7b-fdf01ec1f7ff", 00:19:29.425 "is_configured": true, 00:19:29.425 "data_offset": 256, 00:19:29.425 "data_size": 7936 00:19:29.425 } 00:19:29.425 ] 00:19:29.425 }' 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.425 12:50:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.991 12:50:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.991 [2024-11-06 12:50:18.363963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.991 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:29.991 "name": "Existed_Raid", 00:19:29.991 "aliases": [ 00:19:29.991 "36d50997-5d29-4708-a5a2-ab39fb53954e" 00:19:29.991 ], 00:19:29.991 "product_name": "Raid Volume", 00:19:29.991 "block_size": 4128, 00:19:29.991 "num_blocks": 7936, 00:19:29.991 "uuid": "36d50997-5d29-4708-a5a2-ab39fb53954e", 00:19:29.992 "md_size": 32, 00:19:29.992 "md_interleave": true, 00:19:29.992 "dif_type": 0, 00:19:29.992 "assigned_rate_limits": { 00:19:29.992 "rw_ios_per_sec": 0, 00:19:29.992 "rw_mbytes_per_sec": 0, 00:19:29.992 "r_mbytes_per_sec": 0, 00:19:29.992 "w_mbytes_per_sec": 0 00:19:29.992 }, 00:19:29.992 "claimed": false, 00:19:29.992 "zoned": false, 00:19:29.992 "supported_io_types": { 00:19:29.992 "read": true, 00:19:29.992 "write": true, 00:19:29.992 "unmap": false, 00:19:29.992 "flush": false, 00:19:29.992 "reset": true, 00:19:29.992 "nvme_admin": false, 00:19:29.992 "nvme_io": false, 00:19:29.992 "nvme_io_md": false, 00:19:29.992 "write_zeroes": true, 00:19:29.992 "zcopy": false, 00:19:29.992 "get_zone_info": false, 00:19:29.992 "zone_management": false, 00:19:29.992 "zone_append": false, 00:19:29.992 "compare": false, 00:19:29.992 "compare_and_write": false, 00:19:29.992 "abort": false, 00:19:29.992 "seek_hole": false, 00:19:29.992 "seek_data": false, 00:19:29.992 "copy": false, 00:19:29.992 "nvme_iov_md": false 00:19:29.992 }, 00:19:29.992 "memory_domains": [ 00:19:29.992 { 00:19:29.992 "dma_device_id": "system", 00:19:29.992 "dma_device_type": 1 00:19:29.992 }, 00:19:29.992 { 00:19:29.992 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:29.992 "dma_device_type": 2 00:19:29.992 }, 00:19:29.992 { 00:19:29.992 "dma_device_id": "system", 00:19:29.992 "dma_device_type": 1 00:19:29.992 }, 00:19:29.992 { 00:19:29.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.992 "dma_device_type": 2 00:19:29.992 } 00:19:29.992 ], 00:19:29.992 "driver_specific": { 00:19:29.992 "raid": { 00:19:29.992 "uuid": "36d50997-5d29-4708-a5a2-ab39fb53954e", 00:19:29.992 "strip_size_kb": 0, 00:19:29.992 "state": "online", 00:19:29.992 "raid_level": "raid1", 00:19:29.992 "superblock": true, 00:19:29.992 "num_base_bdevs": 2, 00:19:29.992 "num_base_bdevs_discovered": 2, 00:19:29.992 "num_base_bdevs_operational": 2, 00:19:29.992 "base_bdevs_list": [ 00:19:29.992 { 00:19:29.992 "name": "BaseBdev1", 00:19:29.992 "uuid": "b9eaedd2-3546-4bf7-b9ea-f09b7fc8f9ff", 00:19:29.992 "is_configured": true, 00:19:29.992 "data_offset": 256, 00:19:29.992 "data_size": 7936 00:19:29.992 }, 00:19:29.992 { 00:19:29.992 "name": "BaseBdev2", 00:19:29.992 "uuid": "a5481614-a4ac-452a-bd7b-fdf01ec1f7ff", 00:19:29.992 "is_configured": true, 00:19:29.992 "data_offset": 256, 00:19:29.992 "data_size": 7936 00:19:29.992 } 00:19:29.992 ] 00:19:29.992 } 00:19:29.992 } 00:19:29.992 }' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:29.992 BaseBdev2' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:29.992 
12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.992 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.992 [2024-11-06 12:50:18.627763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.251 12:50:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.251 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.252 "name": "Existed_Raid", 00:19:30.252 "uuid": "36d50997-5d29-4708-a5a2-ab39fb53954e", 00:19:30.252 "strip_size_kb": 0, 00:19:30.252 "state": "online", 00:19:30.252 "raid_level": "raid1", 00:19:30.252 "superblock": true, 00:19:30.252 "num_base_bdevs": 2, 00:19:30.252 "num_base_bdevs_discovered": 1, 00:19:30.252 "num_base_bdevs_operational": 1, 00:19:30.252 "base_bdevs_list": [ 00:19:30.252 { 00:19:30.252 "name": null, 00:19:30.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:30.252 "is_configured": false, 00:19:30.252 "data_offset": 0, 00:19:30.252 "data_size": 7936 00:19:30.252 }, 00:19:30.252 { 00:19:30.252 "name": "BaseBdev2", 00:19:30.252 "uuid": "a5481614-a4ac-452a-bd7b-fdf01ec1f7ff", 00:19:30.252 "is_configured": true, 00:19:30.252 "data_offset": 256, 00:19:30.252 "data_size": 7936 00:19:30.252 } 00:19:30.252 ] 00:19:30.252 }' 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.252 12:50:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:30.818 12:50:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.818 [2024-11-06 12:50:19.285900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:30.818 [2024-11-06 12:50:19.286066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.818 [2024-11-06 12:50:19.379470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.818 [2024-11-06 12:50:19.379809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.818 [2024-11-06 12:50:19.379979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89027 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89027 ']' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89027 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89027 00:19:30.818 killing process with pid 89027 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89027' 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89027 00:19:30.818 [2024-11-06 12:50:19.469001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.818 12:50:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89027 00:19:31.076 [2024-11-06 12:50:19.484319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.010 
12:50:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:32.010 00:19:32.010 real 0m5.666s 00:19:32.010 user 0m8.435s 00:19:32.010 sys 0m0.912s 00:19:32.010 12:50:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.010 12:50:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.010 ************************************ 00:19:32.010 END TEST raid_state_function_test_sb_md_interleaved 00:19:32.010 ************************************ 00:19:32.010 12:50:20 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:32.010 12:50:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:32.010 12:50:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.010 12:50:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.010 ************************************ 00:19:32.010 START TEST raid_superblock_test_md_interleaved 00:19:32.010 ************************************ 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89275 00:19:32.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89275 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89275 ']' 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.010 12:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.268 [2024-11-06 12:50:20.774241] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:19:32.268 [2024-11-06 12:50:20.774450] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89275 ] 00:19:32.527 [2024-11-06 12:50:20.960978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.527 [2024-11-06 12:50:21.112677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.785 [2024-11-06 12:50:21.344204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.785 [2024-11-06 12:50:21.344264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.351 malloc1 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.351 [2024-11-06 12:50:21.839137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:33.351 [2024-11-06 12:50:21.839286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.351 [2024-11-06 12:50:21.839347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:33.351 [2024-11-06 12:50:21.839392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.351 [2024-11-06 12:50:21.842944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.351 [2024-11-06 12:50:21.843008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:33.351 pt1 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:33.351 12:50:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.351 malloc2 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.351 [2024-11-06 12:50:21.905560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.351 [2024-11-06 12:50:21.905633] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.351 [2024-11-06 12:50:21.905671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:33.351 [2024-11-06 12:50:21.905687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.351 [2024-11-06 12:50:21.908580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.351 [2024-11-06 12:50:21.908626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.351 pt2 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.351 [2024-11-06 12:50:21.913613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:33.351 [2024-11-06 12:50:21.916475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.351 [2024-11-06 12:50:21.916864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:33.351 [2024-11-06 12:50:21.917011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:33.351 [2024-11-06 12:50:21.917158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:33.351 [2024-11-06 12:50:21.917436] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:33.351 [2024-11-06 12:50:21.917560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:33.351 [2024-11-06 12:50:21.917864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.351 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.352 "name": "raid_bdev1", 00:19:33.352 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:33.352 "strip_size_kb": 0, 00:19:33.352 "state": "online", 00:19:33.352 "raid_level": "raid1", 00:19:33.352 "superblock": true, 00:19:33.352 "num_base_bdevs": 2, 00:19:33.352 "num_base_bdevs_discovered": 2, 00:19:33.352 "num_base_bdevs_operational": 2, 00:19:33.352 "base_bdevs_list": [ 00:19:33.352 { 00:19:33.352 "name": "pt1", 00:19:33.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.352 "is_configured": true, 00:19:33.352 "data_offset": 256, 00:19:33.352 "data_size": 7936 00:19:33.352 }, 00:19:33.352 { 00:19:33.352 "name": "pt2", 00:19:33.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.352 "is_configured": true, 00:19:33.352 "data_offset": 256, 00:19:33.352 "data_size": 7936 00:19:33.352 } 00:19:33.352 ] 00:19:33.352 }' 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.352 12:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:33.927 12:50:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:33.927 [2024-11-06 12:50:22.438493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.927 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.927 "name": "raid_bdev1", 00:19:33.927 "aliases": [ 00:19:33.927 "56e474e7-a03e-4b89-b21d-964c62133bbf" 00:19:33.927 ], 00:19:33.927 "product_name": "Raid Volume", 00:19:33.927 "block_size": 4128, 00:19:33.927 "num_blocks": 7936, 00:19:33.927 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:33.927 "md_size": 32, 00:19:33.927 "md_interleave": true, 00:19:33.927 "dif_type": 0, 00:19:33.927 "assigned_rate_limits": { 00:19:33.927 "rw_ios_per_sec": 0, 00:19:33.927 "rw_mbytes_per_sec": 0, 00:19:33.927 "r_mbytes_per_sec": 0, 00:19:33.927 "w_mbytes_per_sec": 0 00:19:33.927 }, 00:19:33.927 "claimed": false, 00:19:33.927 "zoned": false, 00:19:33.927 "supported_io_types": { 00:19:33.927 "read": true, 00:19:33.927 "write": true, 00:19:33.927 "unmap": false, 00:19:33.927 "flush": false, 00:19:33.927 "reset": true, 
00:19:33.927 "nvme_admin": false, 00:19:33.927 "nvme_io": false, 00:19:33.927 "nvme_io_md": false, 00:19:33.927 "write_zeroes": true, 00:19:33.927 "zcopy": false, 00:19:33.927 "get_zone_info": false, 00:19:33.927 "zone_management": false, 00:19:33.927 "zone_append": false, 00:19:33.927 "compare": false, 00:19:33.927 "compare_and_write": false, 00:19:33.927 "abort": false, 00:19:33.927 "seek_hole": false, 00:19:33.927 "seek_data": false, 00:19:33.927 "copy": false, 00:19:33.927 "nvme_iov_md": false 00:19:33.927 }, 00:19:33.927 "memory_domains": [ 00:19:33.927 { 00:19:33.927 "dma_device_id": "system", 00:19:33.927 "dma_device_type": 1 00:19:33.927 }, 00:19:33.927 { 00:19:33.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.927 "dma_device_type": 2 00:19:33.927 }, 00:19:33.928 { 00:19:33.928 "dma_device_id": "system", 00:19:33.928 "dma_device_type": 1 00:19:33.928 }, 00:19:33.928 { 00:19:33.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.928 "dma_device_type": 2 00:19:33.928 } 00:19:33.928 ], 00:19:33.928 "driver_specific": { 00:19:33.928 "raid": { 00:19:33.928 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:33.928 "strip_size_kb": 0, 00:19:33.928 "state": "online", 00:19:33.928 "raid_level": "raid1", 00:19:33.928 "superblock": true, 00:19:33.928 "num_base_bdevs": 2, 00:19:33.928 "num_base_bdevs_discovered": 2, 00:19:33.928 "num_base_bdevs_operational": 2, 00:19:33.928 "base_bdevs_list": [ 00:19:33.928 { 00:19:33.928 "name": "pt1", 00:19:33.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.928 "is_configured": true, 00:19:33.928 "data_offset": 256, 00:19:33.928 "data_size": 7936 00:19:33.928 }, 00:19:33.928 { 00:19:33.928 "name": "pt2", 00:19:33.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.928 "is_configured": true, 00:19:33.928 "data_offset": 256, 00:19:33.928 "data_size": 7936 00:19:33.928 } 00:19:33.928 ] 00:19:33.928 } 00:19:33.928 } 00:19:33.928 }' 00:19:33.928 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:33.928 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:33.928 pt2' 00:19:33.928 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.200 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:34.200 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:34.201 [2024-11-06 12:50:22.686470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=56e474e7-a03e-4b89-b21d-964c62133bbf 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 56e474e7-a03e-4b89-b21d-964c62133bbf ']' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 [2024-11-06 12:50:22.738095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.201 [2024-11-06 12:50:22.738269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.201 [2024-11-06 12:50:22.738484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.201 [2024-11-06 12:50:22.738715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.201 [2024-11-06 12:50:22.738855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:34.201 12:50:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:34.201 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:34.461 
12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.461 [2024-11-06 12:50:22.878180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:34.461 [2024-11-06 12:50:22.881364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:34.461 [2024-11-06 12:50:22.881494] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:34.461 [2024-11-06 12:50:22.881604] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:34.461 [2024-11-06 12:50:22.881631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.461 [2024-11-06 12:50:22.881661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:34.461 request: 
00:19:34.461 { 00:19:34.461 "name": "raid_bdev1", 00:19:34.461 "raid_level": "raid1", 00:19:34.461 "base_bdevs": [ 00:19:34.461 "malloc1", 00:19:34.461 "malloc2" 00:19:34.461 ], 00:19:34.461 "superblock": false, 00:19:34.461 "method": "bdev_raid_create", 00:19:34.461 "req_id": 1 00:19:34.461 } 00:19:34.461 Got JSON-RPC error response 00:19:34.461 response: 00:19:34.461 { 00:19:34.461 "code": -17, 00:19:34.461 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:34.461 } 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.461 [2024-11-06 12:50:22.946399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.461 [2024-11-06 12:50:22.946484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.461 [2024-11-06 12:50:22.946513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:34.461 [2024-11-06 12:50:22.946531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.461 [2024-11-06 12:50:22.949573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.461 [2024-11-06 12:50:22.949822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.461 [2024-11-06 12:50:22.949911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:34.461 [2024-11-06 12:50:22.949992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.461 pt1 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.461 12:50:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.461 12:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.461 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.461 "name": "raid_bdev1", 00:19:34.462 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:34.462 "strip_size_kb": 0, 00:19:34.462 "state": "configuring", 00:19:34.462 "raid_level": "raid1", 00:19:34.462 "superblock": true, 00:19:34.462 "num_base_bdevs": 2, 00:19:34.462 "num_base_bdevs_discovered": 1, 00:19:34.462 "num_base_bdevs_operational": 2, 00:19:34.462 "base_bdevs_list": [ 00:19:34.462 { 00:19:34.462 "name": "pt1", 00:19:34.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.462 "is_configured": true, 00:19:34.462 
"data_offset": 256, 00:19:34.462 "data_size": 7936 00:19:34.462 }, 00:19:34.462 { 00:19:34.462 "name": null, 00:19:34.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.462 "is_configured": false, 00:19:34.462 "data_offset": 256, 00:19:34.462 "data_size": 7936 00:19:34.462 } 00:19:34.462 ] 00:19:34.462 }' 00:19:34.462 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.462 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.030 [2024-11-06 12:50:23.474695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.030 [2024-11-06 12:50:23.474807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.030 [2024-11-06 12:50:23.474846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:35.030 [2024-11-06 12:50:23.474865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.030 [2024-11-06 12:50:23.475143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.030 [2024-11-06 12:50:23.475173] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:19:35.030 [2024-11-06 12:50:23.475285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:35.030 [2024-11-06 12:50:23.475328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.030 [2024-11-06 12:50:23.475475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:35.030 [2024-11-06 12:50:23.475498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:35.030 [2024-11-06 12:50:23.475592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:35.030 [2024-11-06 12:50:23.475711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:35.030 [2024-11-06 12:50:23.475809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:35.030 [2024-11-06 12:50:23.475936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.030 pt2 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.030 12:50:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.030 "name": "raid_bdev1", 00:19:35.030 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:35.030 "strip_size_kb": 0, 00:19:35.030 "state": "online", 00:19:35.030 "raid_level": "raid1", 00:19:35.030 "superblock": true, 00:19:35.030 "num_base_bdevs": 2, 00:19:35.030 "num_base_bdevs_discovered": 2, 00:19:35.030 "num_base_bdevs_operational": 2, 00:19:35.030 "base_bdevs_list": [ 00:19:35.030 { 00:19:35.030 "name": "pt1", 00:19:35.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.030 "is_configured": true, 00:19:35.030 
"data_offset": 256, 00:19:35.030 "data_size": 7936 00:19:35.030 }, 00:19:35.030 { 00:19:35.030 "name": "pt2", 00:19:35.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.030 "is_configured": true, 00:19:35.030 "data_offset": 256, 00:19:35.030 "data_size": 7936 00:19:35.030 } 00:19:35.030 ] 00:19:35.030 }' 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.030 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.597 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.598 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.598 12:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.598 [2024-11-06 12:50:24.003247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.598 "name": "raid_bdev1", 00:19:35.598 "aliases": [ 00:19:35.598 "56e474e7-a03e-4b89-b21d-964c62133bbf" 00:19:35.598 ], 00:19:35.598 "product_name": "Raid Volume", 00:19:35.598 "block_size": 4128, 00:19:35.598 "num_blocks": 7936, 00:19:35.598 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:35.598 "md_size": 32, 00:19:35.598 "md_interleave": true, 00:19:35.598 "dif_type": 0, 00:19:35.598 "assigned_rate_limits": { 00:19:35.598 "rw_ios_per_sec": 0, 00:19:35.598 "rw_mbytes_per_sec": 0, 00:19:35.598 "r_mbytes_per_sec": 0, 00:19:35.598 "w_mbytes_per_sec": 0 00:19:35.598 }, 00:19:35.598 "claimed": false, 00:19:35.598 "zoned": false, 00:19:35.598 "supported_io_types": { 00:19:35.598 "read": true, 00:19:35.598 "write": true, 00:19:35.598 "unmap": false, 00:19:35.598 "flush": false, 00:19:35.598 "reset": true, 00:19:35.598 "nvme_admin": false, 00:19:35.598 "nvme_io": false, 00:19:35.598 "nvme_io_md": false, 00:19:35.598 "write_zeroes": true, 00:19:35.598 "zcopy": false, 00:19:35.598 "get_zone_info": false, 00:19:35.598 "zone_management": false, 00:19:35.598 "zone_append": false, 00:19:35.598 "compare": false, 00:19:35.598 "compare_and_write": false, 00:19:35.598 "abort": false, 00:19:35.598 "seek_hole": false, 00:19:35.598 "seek_data": false, 00:19:35.598 "copy": false, 00:19:35.598 "nvme_iov_md": false 00:19:35.598 }, 00:19:35.598 "memory_domains": [ 00:19:35.598 { 00:19:35.598 "dma_device_id": "system", 00:19:35.598 "dma_device_type": 1 00:19:35.598 }, 00:19:35.598 { 00:19:35.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.598 "dma_device_type": 2 00:19:35.598 }, 00:19:35.598 { 00:19:35.598 "dma_device_id": "system", 00:19:35.598 "dma_device_type": 1 00:19:35.598 }, 00:19:35.598 { 00:19:35.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.598 "dma_device_type": 2 00:19:35.598 } 00:19:35.598 ], 00:19:35.598 "driver_specific": { 
00:19:35.598 "raid": { 00:19:35.598 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:35.598 "strip_size_kb": 0, 00:19:35.598 "state": "online", 00:19:35.598 "raid_level": "raid1", 00:19:35.598 "superblock": true, 00:19:35.598 "num_base_bdevs": 2, 00:19:35.598 "num_base_bdevs_discovered": 2, 00:19:35.598 "num_base_bdevs_operational": 2, 00:19:35.598 "base_bdevs_list": [ 00:19:35.598 { 00:19:35.598 "name": "pt1", 00:19:35.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.598 "is_configured": true, 00:19:35.598 "data_offset": 256, 00:19:35.598 "data_size": 7936 00:19:35.598 }, 00:19:35.598 { 00:19:35.598 "name": "pt2", 00:19:35.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.598 "is_configured": true, 00:19:35.598 "data_offset": 256, 00:19:35.598 "data_size": 7936 00:19:35.598 } 00:19:35.598 ] 00:19:35.598 } 00:19:35.598 } 00:19:35.598 }' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:35.598 pt2' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.598 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.857 [2024-11-06 12:50:24.279247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 56e474e7-a03e-4b89-b21d-964c62133bbf '!=' 56e474e7-a03e-4b89-b21d-964c62133bbf ']' 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.857 [2024-11-06 12:50:24.330970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.857 
12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.857 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.857 "name": "raid_bdev1", 00:19:35.857 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:35.857 "strip_size_kb": 0, 00:19:35.857 "state": "online", 00:19:35.857 "raid_level": "raid1", 00:19:35.857 "superblock": true, 00:19:35.857 "num_base_bdevs": 2, 00:19:35.857 "num_base_bdevs_discovered": 1, 00:19:35.857 "num_base_bdevs_operational": 1, 00:19:35.857 "base_bdevs_list": [ 00:19:35.857 { 00:19:35.857 "name": null, 00:19:35.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.857 "is_configured": false, 00:19:35.857 
"data_offset": 0, 00:19:35.857 "data_size": 7936 00:19:35.857 }, 00:19:35.857 { 00:19:35.858 "name": "pt2", 00:19:35.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.858 "is_configured": true, 00:19:35.858 "data_offset": 256, 00:19:35.858 "data_size": 7936 00:19:35.858 } 00:19:35.858 ] 00:19:35.858 }' 00:19:35.858 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.858 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.436 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.436 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.436 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.436 [2024-11-06 12:50:24.863176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.436 [2024-11-06 12:50:24.863371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.436 [2024-11-06 12:50:24.863639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.436 [2024-11-06 12:50:24.863742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.436 [2024-11-06 12:50:24.863765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:36.436 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:36.437 12:50:24 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.437 [2024-11-06 12:50:24.935139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.437 [2024-11-06 12:50:24.935250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.437 [2024-11-06 12:50:24.935281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:36.437 [2024-11-06 12:50:24.935301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.437 [2024-11-06 12:50:24.938244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.437 [2024-11-06 12:50:24.938320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.437 [2024-11-06 12:50:24.938411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:36.437 [2024-11-06 12:50:24.938482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.437 [2024-11-06 12:50:24.938618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:36.437 [2024-11-06 12:50:24.938640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:36.437 [2024-11-06 12:50:24.938755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:36.437 [2024-11-06 12:50:24.938864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:36.437 [2024-11-06 12:50:24.938879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:36.437 [2024-11-06 12:50:24.938963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:36.437 pt2 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.437 "name": "raid_bdev1", 00:19:36.437 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:36.437 "strip_size_kb": 0, 00:19:36.437 "state": "online", 00:19:36.437 "raid_level": "raid1", 00:19:36.437 "superblock": true, 00:19:36.437 "num_base_bdevs": 2, 00:19:36.437 "num_base_bdevs_discovered": 1, 00:19:36.437 "num_base_bdevs_operational": 1, 00:19:36.437 "base_bdevs_list": [ 00:19:36.437 { 00:19:36.437 "name": null, 00:19:36.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.437 "is_configured": false, 00:19:36.437 "data_offset": 256, 00:19:36.437 "data_size": 7936 00:19:36.437 }, 00:19:36.437 { 00:19:36.437 "name": "pt2", 00:19:36.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.437 "is_configured": true, 00:19:36.437 "data_offset": 256, 00:19:36.437 "data_size": 7936 00:19:36.437 } 00:19:36.437 ] 00:19:36.437 }' 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.437 12:50:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.004 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:37.004 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.004 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.004 [2024-11-06 12:50:25.479321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.004 [2024-11-06 12:50:25.479362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.004 [2024-11-06 12:50:25.479534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.004 
[2024-11-06 12:50:25.479619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.004 [2024-11-06 12:50:25.479638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:37.004 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.004 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.004 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.005 [2024-11-06 12:50:25.543332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:37.005 [2024-11-06 12:50:25.543563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:19:37.005 [2024-11-06 12:50:25.543610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:37.005 [2024-11-06 12:50:25.543628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.005 [2024-11-06 12:50:25.546794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.005 [2024-11-06 12:50:25.546972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:37.005 [2024-11-06 12:50:25.547062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:37.005 [2024-11-06 12:50:25.547129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:37.005 [2024-11-06 12:50:25.547313] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:37.005 [2024-11-06 12:50:25.547332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.005 [2024-11-06 12:50:25.547355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:37.005 [2024-11-06 12:50:25.547458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.005 pt1 00:19:37.005 [2024-11-06 12:50:25.547619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:37.005 [2024-11-06 12:50:25.547636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:37.005 [2024-11-06 12:50:25.547756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:37.005 [2024-11-06 12:50:25.547892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:37.005 [2024-11-06 12:50:25.547911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 
00:19:37.005 [2024-11-06 12:50:25.548009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.005 12:50:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.005 "name": "raid_bdev1", 00:19:37.005 "uuid": "56e474e7-a03e-4b89-b21d-964c62133bbf", 00:19:37.005 "strip_size_kb": 0, 00:19:37.005 "state": "online", 00:19:37.005 "raid_level": "raid1", 00:19:37.005 "superblock": true, 00:19:37.005 "num_base_bdevs": 2, 00:19:37.005 "num_base_bdevs_discovered": 1, 00:19:37.005 "num_base_bdevs_operational": 1, 00:19:37.005 "base_bdevs_list": [ 00:19:37.005 { 00:19:37.005 "name": null, 00:19:37.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.005 "is_configured": false, 00:19:37.005 "data_offset": 256, 00:19:37.005 "data_size": 7936 00:19:37.005 }, 00:19:37.005 { 00:19:37.005 "name": "pt2", 00:19:37.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.005 "is_configured": true, 00:19:37.005 "data_offset": 256, 00:19:37.005 "data_size": 7936 00:19:37.005 } 00:19:37.005 ] 00:19:37.005 }' 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.005 12:50:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.573 12:50:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.573 [2024-11-06 12:50:26.131946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 56e474e7-a03e-4b89-b21d-964c62133bbf '!=' 56e474e7-a03e-4b89-b21d-964c62133bbf ']' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89275 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89275 ']' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89275 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89275 00:19:37.573 killing process with pid 89275 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89275' 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89275 00:19:37.573 [2024-11-06 12:50:26.219288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.573 12:50:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89275 00:19:37.573 [2024-11-06 12:50:26.219448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.573 [2024-11-06 12:50:26.219526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.573 [2024-11-06 12:50:26.219552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:37.845 [2024-11-06 12:50:26.402916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.808 ************************************ 00:19:38.808 END TEST raid_superblock_test_md_interleaved 00:19:38.808 ************************************ 00:19:38.808 12:50:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:38.808 00:19:38.808 real 0m6.799s 00:19:38.808 user 0m10.740s 00:19:38.808 sys 0m1.025s 00:19:38.808 12:50:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:38.808 12:50:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.067 12:50:27 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:39.067 12:50:27 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:39.067 12:50:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:39.067 12:50:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.067 ************************************ 00:19:39.067 START TEST raid_rebuild_test_sb_md_interleaved 00:19:39.067 ************************************ 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:39.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89609 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89609 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89609 ']' 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:39.067 12:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.067 [2024-11-06 12:50:27.633948] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:19:39.067 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:39.067 Zero copy mechanism will not be used. 
00:19:39.067 [2024-11-06 12:50:27.634511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89609 ] 00:19:39.327 [2024-11-06 12:50:27.825365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.327 [2024-11-06 12:50:27.979033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.585 [2024-11-06 12:50:28.212544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.585 [2024-11-06 12:50:28.212594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.152 BaseBdev1_malloc 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.152 12:50:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.152 [2024-11-06 12:50:28.711812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:40.152 [2024-11-06 12:50:28.711918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.152 [2024-11-06 12:50:28.711951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:40.152 [2024-11-06 12:50:28.711976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.152 [2024-11-06 12:50:28.714856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.152 [2024-11-06 12:50:28.714910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:40.152 BaseBdev1 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.152 BaseBdev2_malloc 00:19:40.152 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.153 [2024-11-06 12:50:28.769151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:40.153 [2024-11-06 12:50:28.769274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.153 [2024-11-06 12:50:28.769306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:40.153 [2024-11-06 12:50:28.769325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.153 [2024-11-06 12:50:28.772009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.153 [2024-11-06 12:50:28.772071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:40.153 BaseBdev2 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.153 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 spare_malloc 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 spare_delay 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 [2024-11-06 12:50:28.848132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.411 [2024-11-06 12:50:28.848250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.411 [2024-11-06 12:50:28.848284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:40.411 [2024-11-06 12:50:28.848303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.411 [2024-11-06 12:50:28.851047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.411 [2024-11-06 12:50:28.851281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.411 spare 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.411 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 [2024-11-06 12:50:28.856227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.411 [2024-11-06 12:50:28.858860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.411 [2024-11-06 
12:50:28.859130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:40.411 [2024-11-06 12:50:28.859152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:40.412 [2024-11-06 12:50:28.859296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:40.412 [2024-11-06 12:50:28.859429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:40.412 [2024-11-06 12:50:28.859452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:40.412 [2024-11-06 12:50:28.859552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.412 "name": "raid_bdev1", 00:19:40.412 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:40.412 "strip_size_kb": 0, 00:19:40.412 "state": "online", 00:19:40.412 "raid_level": "raid1", 00:19:40.412 "superblock": true, 00:19:40.412 "num_base_bdevs": 2, 00:19:40.412 "num_base_bdevs_discovered": 2, 00:19:40.412 "num_base_bdevs_operational": 2, 00:19:40.412 "base_bdevs_list": [ 00:19:40.412 { 00:19:40.412 "name": "BaseBdev1", 00:19:40.412 "uuid": "28b14f77-cd27-5c00-b0c9-9c4427394a42", 00:19:40.412 "is_configured": true, 00:19:40.412 "data_offset": 256, 00:19:40.412 "data_size": 7936 00:19:40.412 }, 00:19:40.412 { 00:19:40.412 "name": "BaseBdev2", 00:19:40.412 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:40.412 "is_configured": true, 00:19:40.412 "data_offset": 256, 00:19:40.412 "data_size": 7936 00:19:40.412 } 00:19:40.412 ] 00:19:40.412 }' 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.412 12:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.980 12:50:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.980 [2024-11-06 12:50:29.336867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:40.980 12:50:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.980 [2024-11-06 12:50:29.416385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.980 12:50:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.980 "name": "raid_bdev1", 00:19:40.980 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:40.980 "strip_size_kb": 0, 00:19:40.980 "state": "online", 00:19:40.980 "raid_level": "raid1", 00:19:40.980 "superblock": true, 00:19:40.980 "num_base_bdevs": 2, 00:19:40.980 "num_base_bdevs_discovered": 1, 00:19:40.980 "num_base_bdevs_operational": 1, 00:19:40.980 "base_bdevs_list": [ 00:19:40.980 { 00:19:40.980 "name": null, 00:19:40.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.980 "is_configured": false, 00:19:40.980 "data_offset": 0, 00:19:40.980 "data_size": 7936 00:19:40.980 }, 00:19:40.980 { 00:19:40.980 "name": "BaseBdev2", 00:19:40.980 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:40.980 "is_configured": true, 00:19:40.980 "data_offset": 256, 00:19:40.980 "data_size": 7936 00:19:40.980 } 00:19:40.980 ] 00:19:40.980 }' 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.980 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.547 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.547 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.547 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:41.547 [2024-11-06 12:50:29.916672] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.547 [2024-11-06 12:50:29.934786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:41.547 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.547 12:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:41.547 [2024-11-06 12:50:29.937557] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.481 12:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.481 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.481 "name": "raid_bdev1", 00:19:42.481 
"uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:42.481 "strip_size_kb": 0, 00:19:42.481 "state": "online", 00:19:42.481 "raid_level": "raid1", 00:19:42.481 "superblock": true, 00:19:42.481 "num_base_bdevs": 2, 00:19:42.481 "num_base_bdevs_discovered": 2, 00:19:42.481 "num_base_bdevs_operational": 2, 00:19:42.481 "process": { 00:19:42.481 "type": "rebuild", 00:19:42.481 "target": "spare", 00:19:42.481 "progress": { 00:19:42.481 "blocks": 2560, 00:19:42.481 "percent": 32 00:19:42.481 } 00:19:42.481 }, 00:19:42.481 "base_bdevs_list": [ 00:19:42.481 { 00:19:42.481 "name": "spare", 00:19:42.481 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:42.481 "is_configured": true, 00:19:42.481 "data_offset": 256, 00:19:42.481 "data_size": 7936 00:19:42.481 }, 00:19:42.481 { 00:19:42.481 "name": "BaseBdev2", 00:19:42.481 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:42.481 "is_configured": true, 00:19:42.482 "data_offset": 256, 00:19:42.482 "data_size": 7936 00:19:42.482 } 00:19:42.482 ] 00:19:42.482 }' 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.482 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.482 [2024-11-06 12:50:31.115091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:42.740 [2024-11-06 12:50:31.148742] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:42.740 [2024-11-06 12:50:31.148984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.740 [2024-11-06 12:50:31.149016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:42.740 [2024-11-06 12:50:31.149038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.740 "name": "raid_bdev1", 00:19:42.740 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:42.740 "strip_size_kb": 0, 00:19:42.740 "state": "online", 00:19:42.740 "raid_level": "raid1", 00:19:42.740 "superblock": true, 00:19:42.740 "num_base_bdevs": 2, 00:19:42.740 "num_base_bdevs_discovered": 1, 00:19:42.740 "num_base_bdevs_operational": 1, 00:19:42.740 "base_bdevs_list": [ 00:19:42.740 { 00:19:42.740 "name": null, 00:19:42.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.740 "is_configured": false, 00:19:42.740 "data_offset": 0, 00:19:42.740 "data_size": 7936 00:19:42.740 }, 00:19:42.740 { 00:19:42.740 "name": "BaseBdev2", 00:19:42.740 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:42.740 "is_configured": true, 00:19:42.740 "data_offset": 256, 00:19:42.740 "data_size": 7936 00:19:42.740 } 00:19:42.740 ] 00:19:42.740 }' 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.740 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.308 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.309 "name": "raid_bdev1", 00:19:43.309 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:43.309 "strip_size_kb": 0, 00:19:43.309 "state": "online", 00:19:43.309 "raid_level": "raid1", 00:19:43.309 "superblock": true, 00:19:43.309 "num_base_bdevs": 2, 00:19:43.309 "num_base_bdevs_discovered": 1, 00:19:43.309 "num_base_bdevs_operational": 1, 00:19:43.309 "base_bdevs_list": [ 00:19:43.309 { 00:19:43.309 "name": null, 00:19:43.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.309 "is_configured": false, 00:19:43.309 "data_offset": 0, 00:19:43.309 "data_size": 7936 00:19:43.309 }, 00:19:43.309 { 00:19:43.309 "name": "BaseBdev2", 00:19:43.309 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:43.309 "is_configured": true, 00:19:43.309 "data_offset": 256, 00:19:43.309 "data_size": 7936 00:19:43.309 } 00:19:43.309 ] 00:19:43.309 }' 
00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:43.309 [2024-11-06 12:50:31.861012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.309 [2024-11-06 12:50:31.878719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.309 12:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:43.309 [2024-11-06 12:50:31.881650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.244 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.503 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.503 "name": "raid_bdev1", 00:19:44.503 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:44.503 "strip_size_kb": 0, 00:19:44.503 "state": "online", 00:19:44.503 "raid_level": "raid1", 00:19:44.503 "superblock": true, 00:19:44.503 "num_base_bdevs": 2, 00:19:44.503 "num_base_bdevs_discovered": 2, 00:19:44.503 "num_base_bdevs_operational": 2, 00:19:44.503 "process": { 00:19:44.503 "type": "rebuild", 00:19:44.503 "target": "spare", 00:19:44.503 "progress": { 00:19:44.503 "blocks": 2560, 00:19:44.503 "percent": 32 00:19:44.503 } 00:19:44.503 }, 00:19:44.503 "base_bdevs_list": [ 00:19:44.503 { 00:19:44.503 "name": "spare", 00:19:44.503 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:44.503 "is_configured": true, 00:19:44.503 "data_offset": 256, 00:19:44.503 "data_size": 7936 00:19:44.503 }, 00:19:44.503 { 00:19:44.503 "name": "BaseBdev2", 00:19:44.504 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:44.504 "is_configured": true, 00:19:44.504 "data_offset": 256, 00:19:44.504 "data_size": 7936 00:19:44.504 } 00:19:44.504 ] 00:19:44.504 }' 00:19:44.504 12:50:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.504 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.504 12:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:44.504 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=807 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.504 12:50:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.504 "name": "raid_bdev1", 00:19:44.504 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:44.504 "strip_size_kb": 0, 00:19:44.504 "state": "online", 00:19:44.504 "raid_level": "raid1", 00:19:44.504 "superblock": true, 00:19:44.504 "num_base_bdevs": 2, 00:19:44.504 "num_base_bdevs_discovered": 2, 00:19:44.504 "num_base_bdevs_operational": 2, 00:19:44.504 "process": { 00:19:44.504 "type": "rebuild", 00:19:44.504 "target": "spare", 00:19:44.504 "progress": { 00:19:44.504 "blocks": 2816, 00:19:44.504 "percent": 35 00:19:44.504 } 00:19:44.504 }, 00:19:44.504 "base_bdevs_list": [ 00:19:44.504 { 00:19:44.504 "name": "spare", 00:19:44.504 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:44.504 "is_configured": true, 00:19:44.504 "data_offset": 256, 00:19:44.504 "data_size": 7936 00:19:44.504 }, 00:19:44.504 { 00:19:44.504 "name": "BaseBdev2", 00:19:44.504 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:44.504 "is_configured": true, 00:19:44.504 "data_offset": 256, 00:19:44.504 "data_size": 7936 00:19:44.504 } 00:19:44.504 ] 00:19:44.504 }' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.504 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.762 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.762 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.762 12:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.698 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.699 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:45.699 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.699 12:50:34 
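[Editor's note] The `(( SECONDS < timeout ))` / `sleep 1` cycle traced above is bash's built-in way to poll with a deadline: `SECONDS` counts seconds since shell start, so the loop re-reads the rebuild state once per second until it completes or the budget runs out. A standalone sketch of the pattern — the function name and condition are illustrative, not taken from bdev_raid.sh, and this version measures a per-call budget rather than time since script start:

```shell
#!/usr/bin/env bash
# Poll a condition command once per second until it succeeds or a
# deadline passes, mirroring the `(( SECONDS < timeout ))` loop above.
wait_for() {
    local budget=$1; shift
    local deadline=$(( SECONDS + budget ))
    while (( SECONDS < deadline )); do
        "$@" && return 0    # condition held
        sleep 1
    done
    return 1                # deadline passed; condition never held
}

marker=$(mktemp)            # hypothetical "rebuild finished" marker file
wait_for 3 test -e "$marker" && echo "condition met"
rm -f "$marker"
```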
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.699 "name": "raid_bdev1", 00:19:45.699 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:45.699 "strip_size_kb": 0, 00:19:45.699 "state": "online", 00:19:45.699 "raid_level": "raid1", 00:19:45.699 "superblock": true, 00:19:45.699 "num_base_bdevs": 2, 00:19:45.699 "num_base_bdevs_discovered": 2, 00:19:45.699 "num_base_bdevs_operational": 2, 00:19:45.699 "process": { 00:19:45.699 "type": "rebuild", 00:19:45.699 "target": "spare", 00:19:45.699 "progress": { 00:19:45.699 "blocks": 5888, 00:19:45.699 "percent": 74 00:19:45.699 } 00:19:45.699 }, 00:19:45.699 "base_bdevs_list": [ 00:19:45.699 { 00:19:45.699 "name": "spare", 00:19:45.699 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:45.699 "is_configured": true, 00:19:45.699 "data_offset": 256, 00:19:45.699 "data_size": 7936 00:19:45.699 }, 00:19:45.699 { 00:19:45.699 "name": "BaseBdev2", 00:19:45.699 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:45.699 "is_configured": true, 00:19:45.699 "data_offset": 256, 00:19:45.699 "data_size": 7936 00:19:45.699 } 00:19:45.699 ] 00:19:45.699 }' 00:19:45.699 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.699 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.699 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.957 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.957 12:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.524 [2024-11-06 12:50:35.010692] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:46.524 [2024-11-06 12:50:35.011035] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.524 [2024-11-06 12:50:35.011233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.782 "name": "raid_bdev1", 00:19:46.782 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:46.782 "strip_size_kb": 0, 00:19:46.782 "state": "online", 00:19:46.782 "raid_level": "raid1", 00:19:46.782 "superblock": true, 00:19:46.782 "num_base_bdevs": 2, 00:19:46.782 
"num_base_bdevs_discovered": 2, 00:19:46.782 "num_base_bdevs_operational": 2, 00:19:46.782 "base_bdevs_list": [ 00:19:46.782 { 00:19:46.782 "name": "spare", 00:19:46.782 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:46.782 "is_configured": true, 00:19:46.782 "data_offset": 256, 00:19:46.782 "data_size": 7936 00:19:46.782 }, 00:19:46.782 { 00:19:46.782 "name": "BaseBdev2", 00:19:46.782 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:46.782 "is_configured": true, 00:19:46.782 "data_offset": 256, 00:19:46.782 "data_size": 7936 00:19:46.782 } 00:19:46.782 ] 00:19:46.782 }' 00:19:46.782 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.041 12:50:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.041 "name": "raid_bdev1", 00:19:47.041 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:47.041 "strip_size_kb": 0, 00:19:47.041 "state": "online", 00:19:47.041 "raid_level": "raid1", 00:19:47.041 "superblock": true, 00:19:47.041 "num_base_bdevs": 2, 00:19:47.041 "num_base_bdevs_discovered": 2, 00:19:47.041 "num_base_bdevs_operational": 2, 00:19:47.041 "base_bdevs_list": [ 00:19:47.041 { 00:19:47.041 "name": "spare", 00:19:47.041 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:47.041 "is_configured": true, 00:19:47.041 "data_offset": 256, 00:19:47.041 "data_size": 7936 00:19:47.041 }, 00:19:47.041 { 00:19:47.041 "name": "BaseBdev2", 00:19:47.041 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:47.041 "is_configured": true, 00:19:47.041 "data_offset": 256, 00:19:47.041 "data_size": 7936 00:19:47.041 } 00:19:47.041 ] 00:19:47.041 }' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.041 12:50:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.041 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.300 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.300 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.300 "name": 
"raid_bdev1", 00:19:47.300 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:47.300 "strip_size_kb": 0, 00:19:47.300 "state": "online", 00:19:47.300 "raid_level": "raid1", 00:19:47.300 "superblock": true, 00:19:47.300 "num_base_bdevs": 2, 00:19:47.300 "num_base_bdevs_discovered": 2, 00:19:47.300 "num_base_bdevs_operational": 2, 00:19:47.300 "base_bdevs_list": [ 00:19:47.300 { 00:19:47.300 "name": "spare", 00:19:47.300 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:47.300 "is_configured": true, 00:19:47.300 "data_offset": 256, 00:19:47.300 "data_size": 7936 00:19:47.300 }, 00:19:47.300 { 00:19:47.300 "name": "BaseBdev2", 00:19:47.300 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:47.300 "is_configured": true, 00:19:47.300 "data_offset": 256, 00:19:47.300 "data_size": 7936 00:19:47.300 } 00:19:47.300 ] 00:19:47.300 }' 00:19:47.300 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.300 12:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.559 [2024-11-06 12:50:36.201278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.559 [2024-11-06 12:50:36.201465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.559 [2024-11-06 12:50:36.201639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.559 [2024-11-06 12:50:36.201752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.559 [2024-11-06 
12:50:36.201771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.559 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.817 12:50:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.817 [2024-11-06 12:50:36.273253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:47.817 [2024-11-06 12:50:36.273321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.817 [2024-11-06 12:50:36.273358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:47.817 [2024-11-06 12:50:36.273373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.817 [2024-11-06 12:50:36.276188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.817 [2024-11-06 12:50:36.276391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:47.817 [2024-11-06 12:50:36.276493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:47.817 [2024-11-06 12:50:36.276564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.817 [2024-11-06 12:50:36.276718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.817 spare 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.817 [2024-11-06 12:50:36.376846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:47.817 [2024-11-06 12:50:36.377009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:47.817 [2024-11-06 12:50:36.377147] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:47.817 [2024-11-06 12:50:36.377307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:47.817 [2024-11-06 12:50:36.377327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:47.817 [2024-11-06 12:50:36.377477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.817 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.818 12:50:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.818 "name": "raid_bdev1", 00:19:47.818 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:47.818 "strip_size_kb": 0, 00:19:47.818 "state": "online", 00:19:47.818 "raid_level": "raid1", 00:19:47.818 "superblock": true, 00:19:47.818 "num_base_bdevs": 2, 00:19:47.818 "num_base_bdevs_discovered": 2, 00:19:47.818 "num_base_bdevs_operational": 2, 00:19:47.818 "base_bdevs_list": [ 00:19:47.818 { 00:19:47.818 "name": "spare", 00:19:47.818 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:47.818 "is_configured": true, 00:19:47.818 "data_offset": 256, 00:19:47.818 "data_size": 7936 00:19:47.818 }, 00:19:47.818 { 00:19:47.818 "name": "BaseBdev2", 00:19:47.818 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:47.818 "is_configured": true, 00:19:47.818 "data_offset": 256, 00:19:47.818 "data_size": 7936 00:19:47.818 } 00:19:47.818 ] 00:19:47.818 }' 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.818 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.385 12:50:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.385 12:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.385 "name": "raid_bdev1", 00:19:48.385 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:48.385 "strip_size_kb": 0, 00:19:48.385 "state": "online", 00:19:48.385 "raid_level": "raid1", 00:19:48.385 "superblock": true, 00:19:48.385 "num_base_bdevs": 2, 00:19:48.385 "num_base_bdevs_discovered": 2, 00:19:48.385 "num_base_bdevs_operational": 2, 00:19:48.385 "base_bdevs_list": [ 00:19:48.385 { 00:19:48.385 "name": "spare", 00:19:48.385 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:48.385 "is_configured": true, 00:19:48.385 "data_offset": 256, 00:19:48.385 "data_size": 7936 00:19:48.385 }, 00:19:48.385 { 00:19:48.385 "name": "BaseBdev2", 00:19:48.385 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:48.385 "is_configured": true, 00:19:48.385 "data_offset": 256, 00:19:48.385 "data_size": 7936 00:19:48.385 } 00:19:48.385 ] 00:19:48.385 }' 00:19:48.386 12:50:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.386 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.386 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.644 [2024-11-06 12:50:37.118079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.644 12:50:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.644 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.644 "name": "raid_bdev1", 00:19:48.644 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:48.644 "strip_size_kb": 0, 00:19:48.644 "state": "online", 00:19:48.644 
"raid_level": "raid1", 00:19:48.644 "superblock": true, 00:19:48.644 "num_base_bdevs": 2, 00:19:48.644 "num_base_bdevs_discovered": 1, 00:19:48.644 "num_base_bdevs_operational": 1, 00:19:48.644 "base_bdevs_list": [ 00:19:48.644 { 00:19:48.644 "name": null, 00:19:48.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.644 "is_configured": false, 00:19:48.644 "data_offset": 0, 00:19:48.644 "data_size": 7936 00:19:48.644 }, 00:19:48.644 { 00:19:48.645 "name": "BaseBdev2", 00:19:48.645 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:48.645 "is_configured": true, 00:19:48.645 "data_offset": 256, 00:19:48.645 "data_size": 7936 00:19:48.645 } 00:19:48.645 ] 00:19:48.645 }' 00:19:48.645 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.645 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.212 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.212 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.212 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:49.212 [2024-11-06 12:50:37.634330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.212 [2024-11-06 12:50:37.634685] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:49.212 [2024-11-06 12:50:37.634713] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
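[Editor's note] Each verification pass in the trace pipes the RPC JSON through `jq -r '.process.type // "none"'`: jq's `//` alternative operator substitutes `"none"` when `.process` is absent or null, which is how the loop detects that the rebuild has finished. A self-contained sketch — the sample JSON is abbreviated from the dumps above:

```shell
#!/usr/bin/env bash
# While rebuilding, bdev_raid_get_bdevs output carries a .process object.
rebuilding='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
# After the rebuild completes, the .process object disappears entirely.
finished='{"name":"raid_bdev1","state":"online"}'

jq -r '.process.type   // "none"' <<<"$rebuilding"   # -> rebuild
jq -r '.process.target // "none"' <<<"$rebuilding"   # -> spare
jq -r '.process.type   // "none"' <<<"$finished"     # -> none
```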
00:19:49.212 [2024-11-06 12:50:37.634784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.212 [2024-11-06 12:50:37.652440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:49.212 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.212 12:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:49.212 [2024-11-06 12:50:37.655190] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:50.167 "name": "raid_bdev1", 00:19:50.167 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:50.167 "strip_size_kb": 0, 00:19:50.167 "state": "online", 00:19:50.167 "raid_level": "raid1", 00:19:50.167 "superblock": true, 00:19:50.167 "num_base_bdevs": 2, 00:19:50.167 "num_base_bdevs_discovered": 2, 00:19:50.167 "num_base_bdevs_operational": 2, 00:19:50.167 "process": { 00:19:50.167 "type": "rebuild", 00:19:50.167 "target": "spare", 00:19:50.167 "progress": { 00:19:50.167 "blocks": 2560, 00:19:50.167 "percent": 32 00:19:50.167 } 00:19:50.167 }, 00:19:50.167 "base_bdevs_list": [ 00:19:50.167 { 00:19:50.167 "name": "spare", 00:19:50.167 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:50.167 "is_configured": true, 00:19:50.167 "data_offset": 256, 00:19:50.167 "data_size": 7936 00:19:50.167 }, 00:19:50.167 { 00:19:50.167 "name": "BaseBdev2", 00:19:50.167 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:50.167 "is_configured": true, 00:19:50.167 "data_offset": 256, 00:19:50.167 "data_size": 7936 00:19:50.167 } 00:19:50.167 ] 00:19:50.167 }' 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.167 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.426 [2024-11-06 12:50:38.828988] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.426 [2024-11-06 12:50:38.866465] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:50.426 [2024-11-06 12:50:38.866556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.426 [2024-11-06 12:50:38.866583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.426 [2024-11-06 12:50:38.866599] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.426 12:50:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.426 "name": "raid_bdev1", 00:19:50.426 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:50.426 "strip_size_kb": 0, 00:19:50.426 "state": "online", 00:19:50.426 "raid_level": "raid1", 00:19:50.426 "superblock": true, 00:19:50.426 "num_base_bdevs": 2, 00:19:50.426 "num_base_bdevs_discovered": 1, 00:19:50.426 "num_base_bdevs_operational": 1, 00:19:50.426 "base_bdevs_list": [ 00:19:50.426 { 00:19:50.426 "name": null, 00:19:50.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.426 "is_configured": false, 00:19:50.426 "data_offset": 0, 00:19:50.426 "data_size": 7936 00:19:50.426 }, 00:19:50.426 { 00:19:50.426 "name": "BaseBdev2", 00:19:50.426 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:50.426 "is_configured": true, 00:19:50.426 "data_offset": 256, 00:19:50.426 "data_size": 7936 00:19:50.426 } 00:19:50.426 ] 00:19:50.426 }' 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.426 12:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.994 12:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:50.994 12:50:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.994 12:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:50.994 [2024-11-06 12:50:39.439115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:50.994 [2024-11-06 12:50:39.439264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.994 [2024-11-06 12:50:39.439316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:50.994 [2024-11-06 12:50:39.439336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.994 [2024-11-06 12:50:39.439674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.994 [2024-11-06 12:50:39.439723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:50.994 [2024-11-06 12:50:39.439817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:50.994 [2024-11-06 12:50:39.439860] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:50.994 [2024-11-06 12:50:39.439875] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:50.994 [2024-11-06 12:50:39.439929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.994 [2024-11-06 12:50:39.457985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:50.994 spare 00:19:50.994 12:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.994 12:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:50.994 [2024-11-06 12:50:39.460947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.929 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:51.929 "name": "raid_bdev1", 00:19:51.929 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:51.929 "strip_size_kb": 0, 00:19:51.929 "state": "online", 00:19:51.929 "raid_level": "raid1", 00:19:51.929 "superblock": true, 00:19:51.929 "num_base_bdevs": 2, 00:19:51.929 "num_base_bdevs_discovered": 2, 00:19:51.929 "num_base_bdevs_operational": 2, 00:19:51.929 "process": { 00:19:51.929 "type": "rebuild", 00:19:51.929 "target": "spare", 00:19:51.929 "progress": { 00:19:51.929 "blocks": 2560, 00:19:51.929 "percent": 32 00:19:51.929 } 00:19:51.929 }, 00:19:51.929 "base_bdevs_list": [ 00:19:51.929 { 00:19:51.929 "name": "spare", 00:19:51.929 "uuid": "08fe4ed5-d618-5514-9dbb-38994eb64d7d", 00:19:51.930 "is_configured": true, 00:19:51.930 "data_offset": 256, 00:19:51.930 "data_size": 7936 00:19:51.930 }, 00:19:51.930 { 00:19:51.930 "name": "BaseBdev2", 00:19:51.930 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:51.930 "is_configured": true, 00:19:51.930 "data_offset": 256, 00:19:51.930 "data_size": 7936 00:19:51.930 } 00:19:51.930 ] 00:19:51.930 }' 00:19:51.930 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.930 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.930 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.189 [2024-11-06 
12:50:40.630976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.189 [2024-11-06 12:50:40.671795] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.189 [2024-11-06 12:50:40.671902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.189 [2024-11-06 12:50:40.671931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.189 [2024-11-06 12:50:40.671942] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.189 12:50:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.189 "name": "raid_bdev1", 00:19:52.189 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:52.189 "strip_size_kb": 0, 00:19:52.189 "state": "online", 00:19:52.189 "raid_level": "raid1", 00:19:52.189 "superblock": true, 00:19:52.189 "num_base_bdevs": 2, 00:19:52.189 "num_base_bdevs_discovered": 1, 00:19:52.189 "num_base_bdevs_operational": 1, 00:19:52.189 "base_bdevs_list": [ 00:19:52.189 { 00:19:52.189 "name": null, 00:19:52.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.189 "is_configured": false, 00:19:52.189 "data_offset": 0, 00:19:52.189 "data_size": 7936 00:19:52.189 }, 00:19:52.189 { 00:19:52.189 "name": "BaseBdev2", 00:19:52.189 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:52.189 "is_configured": true, 00:19:52.189 "data_offset": 256, 00:19:52.189 "data_size": 7936 00:19:52.189 } 00:19:52.189 ] 00:19:52.189 }' 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.189 12:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.756 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.756 12:50:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.756 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.756 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.757 "name": "raid_bdev1", 00:19:52.757 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:52.757 "strip_size_kb": 0, 00:19:52.757 "state": "online", 00:19:52.757 "raid_level": "raid1", 00:19:52.757 "superblock": true, 00:19:52.757 "num_base_bdevs": 2, 00:19:52.757 "num_base_bdevs_discovered": 1, 00:19:52.757 "num_base_bdevs_operational": 1, 00:19:52.757 "base_bdevs_list": [ 00:19:52.757 { 00:19:52.757 "name": null, 00:19:52.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.757 "is_configured": false, 00:19:52.757 "data_offset": 0, 00:19:52.757 "data_size": 7936 00:19:52.757 }, 00:19:52.757 { 00:19:52.757 "name": "BaseBdev2", 00:19:52.757 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:52.757 "is_configured": true, 00:19:52.757 "data_offset": 256, 
00:19:52.757 "data_size": 7936 00:19:52.757 } 00:19:52.757 ] 00:19:52.757 }' 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.757 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:52.757 [2024-11-06 12:50:41.408984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:52.757 [2024-11-06 12:50:41.409076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.757 [2024-11-06 12:50:41.409115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:52.757 [2024-11-06 12:50:41.409130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.757 [2024-11-06 12:50:41.409449] 
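The checks above use `jq -r '.process.type // "none"'` and `'.process.target // "none"'` to confirm that no rebuild process is running on `raid_bdev1`. As a side note, the `//` operator supplies a default when the field is absent or null; a minimal Python sketch of the same extraction (a hypothetical helper, not SPDK code — only the JSON shapes visible in this log are assumed):

```python
import json

# Hypothetical helper mirroring the log's jq filter
# '.process.<field> // "none"': return the field if present, else "none".
def process_field(raid_bdev_info: str, field: str) -> str:
    info = json.loads(raid_bdev_info)
    value = info.get("process", {}).get(field)
    return value if value is not None else "none"

# Abridged shapes taken from the bdev_raid_get_bdevs output in this log:
# a bdev mid-rebuild carries a "process" object; an idle one has none.
rebuilding = '{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}'
idle = '{"name": "raid_bdev1", "state": "online"}'

assert process_field(rebuilding, "type") == "rebuild"
assert process_field(rebuilding, "target") == "spare"
assert process_field(idle, "type") == "none"
assert process_field(idle, "target") == "none"
```

The test script compares these strings against the expected `rebuild`/`spare` or `none`/`none` pair, which is exactly what the `[[ none == \n\o\n\e ]]` lines above do.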
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.757 [2024-11-06 12:50:41.409474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:52.757 [2024-11-06 12:50:41.409550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:52.757 [2024-11-06 12:50:41.409578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:52.757 [2024-11-06 12:50:41.409595] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:52.757 [2024-11-06 12:50:41.409609] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:53.016 BaseBdev1 00:19:53.016 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.016 12:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.952 12:50:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.952 "name": "raid_bdev1", 00:19:53.952 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:53.952 "strip_size_kb": 0, 00:19:53.952 "state": "online", 00:19:53.952 "raid_level": "raid1", 00:19:53.952 "superblock": true, 00:19:53.952 "num_base_bdevs": 2, 00:19:53.952 "num_base_bdevs_discovered": 1, 00:19:53.952 "num_base_bdevs_operational": 1, 00:19:53.952 "base_bdevs_list": [ 00:19:53.952 { 00:19:53.952 "name": null, 00:19:53.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.952 "is_configured": false, 00:19:53.952 "data_offset": 0, 00:19:53.952 "data_size": 7936 00:19:53.952 }, 00:19:53.952 { 00:19:53.952 "name": "BaseBdev2", 00:19:53.952 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:53.952 "is_configured": true, 00:19:53.952 "data_offset": 256, 00:19:53.952 "data_size": 7936 00:19:53.952 } 00:19:53.952 ] 00:19:53.952 }' 00:19:53.952 12:50:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.952 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.520 "name": "raid_bdev1", 00:19:54.520 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:54.520 "strip_size_kb": 0, 00:19:54.520 "state": "online", 00:19:54.520 "raid_level": "raid1", 00:19:54.520 "superblock": true, 00:19:54.520 "num_base_bdevs": 2, 00:19:54.520 "num_base_bdevs_discovered": 1, 00:19:54.520 "num_base_bdevs_operational": 1, 00:19:54.520 "base_bdevs_list": [ 00:19:54.520 { 00:19:54.520 "name": 
null, 00:19:54.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.520 "is_configured": false, 00:19:54.520 "data_offset": 0, 00:19:54.520 "data_size": 7936 00:19:54.520 }, 00:19:54.520 { 00:19:54.520 "name": "BaseBdev2", 00:19:54.520 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:54.520 "is_configured": true, 00:19:54.520 "data_offset": 256, 00:19:54.520 "data_size": 7936 00:19:54.520 } 00:19:54.520 ] 00:19:54.520 }' 00:19:54.520 12:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.520 [2024-11-06 12:50:43.090179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.520 [2024-11-06 12:50:43.090506] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:54.520 [2024-11-06 12:50:43.090545] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:54.520 request: 00:19:54.520 { 00:19:54.520 "base_bdev": "BaseBdev1", 00:19:54.520 "raid_bdev": "raid_bdev1", 00:19:54.520 "method": "bdev_raid_add_base_bdev", 00:19:54.520 "req_id": 1 00:19:54.520 } 00:19:54.520 Got JSON-RPC error response 00:19:54.520 response: 00:19:54.520 { 00:19:54.520 "code": -22, 00:19:54.520 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:54.520 } 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.520 12:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
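The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` step above is an expected-failure check: `BaseBdev1`'s superblock no longer matches `raid_bdev1`, so the RPC returns JSON-RPC error code `-22` (`-EINVAL`), and the test asserts `es=1`. A minimal Python sketch of validating that error response (a hypothetical parser, not part of SPDK — only the request/response bodies printed verbatim in this log are assumed):

```python
import json

# Error-response body exactly as dumped in the log above.
response_body = """
{
    "code": -22,
    "message": "Failed to add base bdev to RAID bdev: Invalid argument"
}
"""

# Hypothetical check mirroring what the test harness asserts: the call
# must fail, and with -22 (-EINVAL) specifically, not some other error.
def is_expected_einval(body: str) -> bool:
    resp = json.loads(body)
    return resp.get("code") == -22

assert is_expected_einval(response_body)
```

In the shell transcript the same outcome is reflected by `[[ 1 == 0 ]]` failing inside `rpc_cmd`, after which `es=1` marks the RPC failure as the expected result.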
raid_bdev1 online raid1 0 1 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.456 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.714 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.714 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.714 "name": "raid_bdev1", 00:19:55.714 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:55.714 "strip_size_kb": 0, 
00:19:55.714 "state": "online", 00:19:55.714 "raid_level": "raid1", 00:19:55.714 "superblock": true, 00:19:55.714 "num_base_bdevs": 2, 00:19:55.714 "num_base_bdevs_discovered": 1, 00:19:55.714 "num_base_bdevs_operational": 1, 00:19:55.714 "base_bdevs_list": [ 00:19:55.714 { 00:19:55.714 "name": null, 00:19:55.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.714 "is_configured": false, 00:19:55.714 "data_offset": 0, 00:19:55.714 "data_size": 7936 00:19:55.714 }, 00:19:55.714 { 00:19:55.714 "name": "BaseBdev2", 00:19:55.714 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:55.714 "is_configured": true, 00:19:55.714 "data_offset": 256, 00:19:55.714 "data_size": 7936 00:19:55.714 } 00:19:55.714 ] 00:19:55.714 }' 00:19:55.714 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.714 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.281 12:50:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.281 "name": "raid_bdev1", 00:19:56.281 "uuid": "40b02882-a795-4175-9c66-02b76bca52f5", 00:19:56.281 "strip_size_kb": 0, 00:19:56.281 "state": "online", 00:19:56.281 "raid_level": "raid1", 00:19:56.281 "superblock": true, 00:19:56.281 "num_base_bdevs": 2, 00:19:56.281 "num_base_bdevs_discovered": 1, 00:19:56.281 "num_base_bdevs_operational": 1, 00:19:56.281 "base_bdevs_list": [ 00:19:56.281 { 00:19:56.281 "name": null, 00:19:56.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.281 "is_configured": false, 00:19:56.281 "data_offset": 0, 00:19:56.281 "data_size": 7936 00:19:56.281 }, 00:19:56.281 { 00:19:56.281 "name": "BaseBdev2", 00:19:56.281 "uuid": "3385669f-a12c-52fe-a63e-4f71f099b7da", 00:19:56.281 "is_configured": true, 00:19:56.281 "data_offset": 256, 00:19:56.281 "data_size": 7936 00:19:56.281 } 00:19:56.281 ] 00:19:56.281 }' 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.281 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89609 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89609 ']' 00:19:56.282 12:50:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89609 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89609 00:19:56.282 killing process with pid 89609 00:19:56.282 Received shutdown signal, test time was about 60.000000 seconds 00:19:56.282 00:19:56.282 Latency(us) 00:19:56.282 [2024-11-06T12:50:44.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.282 [2024-11-06T12:50:44.939Z] =================================================================================================================== 00:19:56.282 [2024-11-06T12:50:44.939Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89609' 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89609 00:19:56.282 [2024-11-06 12:50:44.821024] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:56.282 12:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89609 00:19:56.282 [2024-11-06 12:50:44.821248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.282 [2024-11-06 12:50:44.821374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:56.282 [2024-11-06 12:50:44.821395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:56.539 [2024-11-06 12:50:45.114761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:57.926 12:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:57.926 00:19:57.926 real 0m18.726s 00:19:57.926 user 0m25.413s 00:19:57.926 sys 0m1.539s 00:19:57.926 12:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.926 12:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.926 ************************************ 00:19:57.926 END TEST raid_rebuild_test_sb_md_interleaved 00:19:57.926 ************************************ 00:19:57.926 12:50:46 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:57.926 12:50:46 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:57.926 12:50:46 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89609 ']' 00:19:57.926 12:50:46 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89609 00:19:57.926 12:50:46 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:57.926 00:19:57.926 real 13m10.150s 00:19:57.926 user 18m30.715s 00:19:57.926 sys 1m50.309s 00:19:57.926 12:50:46 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.926 12:50:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:57.926 ************************************ 00:19:57.926 END TEST bdev_raid 00:19:57.926 ************************************ 00:19:57.926 12:50:46 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:57.926 12:50:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:57.926 12:50:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:57.926 12:50:46 -- common/autotest_common.sh@10 -- # set +x 00:19:57.926 
************************************ 00:19:57.926 START TEST spdkcli_raid 00:19:57.926 ************************************ 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:57.926 * Looking for test storage... 00:19:57.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.926 12:50:46 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:57.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.926 --rc genhtml_branch_coverage=1 00:19:57.926 --rc genhtml_function_coverage=1 00:19:57.926 --rc genhtml_legend=1 00:19:57.926 --rc geninfo_all_blocks=1 00:19:57.926 --rc geninfo_unexecuted_blocks=1 00:19:57.926 00:19:57.926 ' 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:57.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.926 --rc genhtml_branch_coverage=1 00:19:57.926 --rc genhtml_function_coverage=1 00:19:57.926 --rc genhtml_legend=1 00:19:57.926 --rc geninfo_all_blocks=1 00:19:57.926 --rc geninfo_unexecuted_blocks=1 00:19:57.926 00:19:57.926 ' 00:19:57.926 
12:50:46 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:57.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.926 --rc genhtml_branch_coverage=1 00:19:57.926 --rc genhtml_function_coverage=1 00:19:57.926 --rc genhtml_legend=1 00:19:57.926 --rc geninfo_all_blocks=1 00:19:57.926 --rc geninfo_unexecuted_blocks=1 00:19:57.926 00:19:57.926 ' 00:19:57.926 12:50:46 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:57.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.926 --rc genhtml_branch_coverage=1 00:19:57.926 --rc genhtml_function_coverage=1 00:19:57.926 --rc genhtml_legend=1 00:19:57.926 --rc geninfo_all_blocks=1 00:19:57.926 --rc geninfo_unexecuted_blocks=1 00:19:57.926 00:19:57.926 ' 00:19:57.926 12:50:46 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:57.926 12:50:46 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:57.926 12:50:46 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:57.926 12:50:46 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:57.926 12:50:46 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:57.926 12:50:46 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:57.926 12:50:46 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:57.927 12:50:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:57.927 12:50:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90296 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90296 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90296 ']' 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.220 12:50:46 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.220 12:50:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.220 [2024-11-06 12:50:46.697678] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:19:58.220 [2024-11-06 12:50:46.697852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90296 ] 00:19:58.478 [2024-11-06 12:50:46.882430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:58.478 [2024-11-06 12:50:47.056911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.478 [2024-11-06 12:50:47.056921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.413 12:50:48 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:59.413 12:50:48 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:19:59.413 12:50:48 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:59.413 12:50:48 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.413 12:50:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:59.671 12:50:48 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:59.671 12:50:48 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.671 12:50:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:59.671 12:50:48 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:59.671 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:59.671 ' 00:20:01.044 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:01.044 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:01.302 12:50:49 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:01.302 12:50:49 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.302 12:50:49 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.302 12:50:49 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:01.302 12:50:49 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.302 12:50:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.302 12:50:49 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:01.302 ' 00:20:02.676 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:02.676 12:50:51 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:02.676 12:50:51 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.676 12:50:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.676 12:50:51 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:02.676 12:50:51 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:02.676 12:50:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.676 12:50:51 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:02.676 12:50:51 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:03.241 12:50:51 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:03.241 12:50:51 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:03.241 12:50:51 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:03.241 12:50:51 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:03.241 12:50:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.241 12:50:51 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:03.241 12:50:51 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.241 12:50:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.241 12:50:51 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:03.241 ' 00:20:04.206 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:04.464 12:50:52 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:04.464 12:50:52 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.464 12:50:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.464 12:50:52 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:04.464 12:50:52 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.464 12:50:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.464 12:50:52 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:04.464 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:04.464 ' 00:20:05.836 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:05.836 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:05.836 12:50:54 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:05.836 12:50:54 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.836 12:50:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.094 12:50:54 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90296 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90296 ']' 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90296 00:20:06.094 12:50:54 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90296 00:20:06.094 killing process with pid 90296 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90296' 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90296 00:20:06.094 12:50:54 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90296 00:20:08.632 Process with pid 90296 is not found 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90296 ']' 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90296 00:20:08.632 12:50:56 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90296 ']' 00:20:08.632 12:50:56 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90296 00:20:08.632 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90296) - No such process 00:20:08.632 12:50:56 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90296 is not found' 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:08.632 12:50:56 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:08.632 ************************************ 00:20:08.632 END TEST spdkcli_raid 
00:20:08.632 ************************************ 00:20:08.632 00:20:08.632 real 0m10.621s 00:20:08.632 user 0m21.801s 00:20:08.632 sys 0m1.373s 00:20:08.632 12:50:56 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:08.632 12:50:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.632 12:50:57 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:08.632 12:50:57 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:08.632 12:50:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:08.632 12:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:08.632 ************************************ 00:20:08.632 START TEST blockdev_raid5f 00:20:08.632 ************************************ 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:08.632 * Looking for test storage... 00:20:08.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.632 12:50:57 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:08.632 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.632 --rc genhtml_branch_coverage=1 00:20:08.632 --rc genhtml_function_coverage=1 00:20:08.632 --rc genhtml_legend=1 00:20:08.632 --rc geninfo_all_blocks=1 00:20:08.632 --rc geninfo_unexecuted_blocks=1 00:20:08.632 00:20:08.632 ' 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:08.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.632 --rc genhtml_branch_coverage=1 00:20:08.632 --rc genhtml_function_coverage=1 00:20:08.632 --rc genhtml_legend=1 00:20:08.632 --rc geninfo_all_blocks=1 00:20:08.632 --rc geninfo_unexecuted_blocks=1 00:20:08.632 00:20:08.632 ' 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:08.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.632 --rc genhtml_branch_coverage=1 00:20:08.632 --rc genhtml_function_coverage=1 00:20:08.632 --rc genhtml_legend=1 00:20:08.632 --rc geninfo_all_blocks=1 00:20:08.632 --rc geninfo_unexecuted_blocks=1 00:20:08.632 00:20:08.632 ' 00:20:08.632 12:50:57 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:08.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.632 --rc genhtml_branch_coverage=1 00:20:08.632 --rc genhtml_function_coverage=1 00:20:08.632 --rc genhtml_legend=1 00:20:08.632 --rc geninfo_all_blocks=1 00:20:08.632 --rc geninfo_unexecuted_blocks=1 00:20:08.632 00:20:08.632 ' 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:08.632 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:08.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90576 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' 
SIGINT SIGTERM EXIT 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:08.633 12:50:57 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90576 00:20:08.633 12:50:57 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90576 ']' 00:20:08.633 12:50:57 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.633 12:50:57 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.633 12:50:57 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.633 12:50:57 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.633 12:50:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:08.891 [2024-11-06 12:50:57.381671] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:20:08.891 [2024-11-06 12:50:57.382363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90576 ] 00:20:09.149 [2024-11-06 12:50:57.578984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.149 [2024-11-06 12:50:57.750436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.085 12:50:58 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.085 12:50:58 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:20:10.085 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:10.085 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:10.085 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:10.085 12:50:58 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.085 12:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 Malloc0 00:20:10.344 Malloc1 00:20:10.344 Malloc2 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.344 12:50:58 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:10.344 12:50:58 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5dcd8c06-6a6c-42eb-9d26-2f8fcb67e706"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5dcd8c06-6a6c-42eb-9d26-2f8fcb67e706",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5dcd8c06-6a6c-42eb-9d26-2f8fcb67e706",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "190b6dbd-2a92-4e74-858b-c9fbf0fbe6ea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"18bdb342-db33-4519-9424-3d216556d8c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fd5544e7-4b68-4963-bcc4-6eb9caafb84f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:10.602 12:50:59 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:10.602 12:50:59 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:10.602 12:50:59 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:10.602 12:50:59 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90576 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90576 ']' 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90576 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90576 00:20:10.602 killing process with pid 90576 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90576' 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90576 00:20:10.602 12:50:59 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90576 00:20:13.170 12:51:01 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:13.170 12:51:01 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:13.170 12:51:01 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:13.170 12:51:01 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:13.170 12:51:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:13.170 ************************************ 00:20:13.170 START TEST bdev_hello_world 00:20:13.170 ************************************ 00:20:13.170 12:51:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:13.429 [2024-11-06 12:51:01.830043] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:20:13.429 [2024-11-06 12:51:01.830284] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90638 ] 00:20:13.429 [2024-11-06 12:51:02.015787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.688 [2024-11-06 12:51:02.163364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.255 [2024-11-06 12:51:02.738680] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:14.255 [2024-11-06 12:51:02.738774] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:14.255 [2024-11-06 12:51:02.738815] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:14.255 [2024-11-06 12:51:02.739480] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:14.255 [2024-11-06 12:51:02.739658] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:14.255 [2024-11-06 12:51:02.739686] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:14.255 [2024-11-06 12:51:02.739811] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:14.255 00:20:14.255 [2024-11-06 12:51:02.739838] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:15.632 00:20:15.632 real 0m2.379s 00:20:15.632 user 0m1.868s 00:20:15.632 sys 0m0.381s 00:20:15.632 ************************************ 00:20:15.632 END TEST bdev_hello_world 00:20:15.632 ************************************ 00:20:15.632 12:51:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:15.632 12:51:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:15.632 12:51:04 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:15.632 12:51:04 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:15.632 12:51:04 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:15.632 12:51:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:15.632 ************************************ 00:20:15.632 START TEST bdev_bounds 00:20:15.632 ************************************ 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90686 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90686' 00:20:15.632 Process bdevio pid: 90686 00:20:15.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90686 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90686 ']' 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.632 12:51:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:15.632 [2024-11-06 12:51:04.243111] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:20:15.632 [2024-11-06 12:51:04.243608] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90686 ] 00:20:15.890 [2024-11-06 12:51:04.426638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:16.154 [2024-11-06 12:51:04.580736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.154 [2024-11-06 12:51:04.580847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.154 [2024-11-06 12:51:04.580848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.721 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.721 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:20:16.721 12:51:05 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:16.979 I/O targets: 00:20:16.979 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:16.979 00:20:16.979 00:20:16.979 CUnit - A unit testing framework for C - Version 2.1-3 00:20:16.979 http://cunit.sourceforge.net/ 00:20:16.979 00:20:16.979 00:20:16.979 Suite: bdevio tests on: raid5f 00:20:16.979 Test: blockdev write read block ...passed 00:20:16.979 Test: blockdev write zeroes read block ...passed 00:20:16.979 Test: blockdev write zeroes read no split ...passed 00:20:16.979 Test: blockdev write zeroes read split ...passed 00:20:17.238 Test: blockdev write zeroes read split partial ...passed 00:20:17.238 Test: blockdev reset ...passed 00:20:17.238 Test: blockdev write read 8 blocks ...passed 00:20:17.238 Test: blockdev write read size > 128k ...passed 00:20:17.238 Test: blockdev write read invalid size ...passed 00:20:17.238 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:17.238 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:17.238 Test: blockdev write read max offset ...passed 00:20:17.238 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:17.238 Test: blockdev writev readv 8 blocks ...passed 00:20:17.238 Test: blockdev writev readv 30 x 1block ...passed 00:20:17.238 Test: blockdev writev readv block ...passed 00:20:17.238 Test: blockdev writev readv size > 128k ...passed 00:20:17.238 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:17.238 Test: blockdev comparev and writev ...passed 00:20:17.238 Test: blockdev nvme passthru rw ...passed 00:20:17.238 Test: blockdev nvme passthru vendor specific ...passed 00:20:17.238 Test: blockdev nvme admin passthru ...passed 00:20:17.238 Test: blockdev copy ...passed 00:20:17.238 00:20:17.238 Run Summary: Type Total Ran Passed Failed Inactive 00:20:17.238 suites 1 1 n/a 0 0 00:20:17.238 tests 23 23 23 0 0 00:20:17.238 asserts 130 130 130 0 n/a 
00:20:17.238 00:20:17.238 Elapsed time = 0.591 seconds 00:20:17.238 0 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90686 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90686 ']' 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90686 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90686 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90686' 00:20:17.238 killing process with pid 90686 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90686 00:20:17.238 12:51:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90686 00:20:18.614 12:51:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:18.614 00:20:18.614 real 0m2.931s 00:20:18.614 user 0m7.290s 00:20:18.614 sys 0m0.500s 00:20:18.614 12:51:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:18.614 12:51:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:18.614 ************************************ 00:20:18.614 END TEST bdev_bounds 00:20:18.614 ************************************ 00:20:18.614 12:51:07 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:18.614 
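[editor's note] The bdev JSON dumped earlier in this log is produced by blockdev.sh@747-748, which pipes `rpc_cmd bdev_get_bdevs` through `jq -r '.[] | select(.claimed == false)'` and then `jq -r .name`. A minimal Python sketch of that filter, using a pared-down bdev list (field names match the dump above; the Malloc base bdevs are assumed claimed by the raid, which is why only raid5f survived the filter in the log):

```python
import json

# Pared-down stand-in for `rpc.py bdev_get_bdevs` output: only the two
# fields the jq filter inspects (the real records also carry UUIDs,
# supported_io_types, driver_specific, etc.).
bdevs_json = json.dumps([
    {"name": "Malloc0", "claimed": True},   # base bdev, claimed by the raid
    {"name": "Malloc1", "claimed": True},
    {"name": "Malloc2", "claimed": True},
    {"name": "raid5f", "claimed": False},   # the raid volume itself
])

# Equivalent of: jq -r '.[] | select(.claimed == false)' | jq -r .name
unclaimed = [b["name"] for b in json.loads(bdevs_json) if not b["claimed"]]
print(unclaimed)  # only the unclaimed raid volume remains
```

With this input the filter yields `['raid5f']`, matching the single bdev record printed in the log above.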
12:51:07 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:18.614 12:51:07 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:18.614 12:51:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:18.614 ************************************ 00:20:18.614 START TEST bdev_nbd 00:20:18.614 ************************************ 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- 
# local nbd_list 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90751 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90751 /var/tmp/spdk-nbd.sock 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90751 ']' 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:18.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:18.614 12:51:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:18.614 [2024-11-06 12:51:07.250142] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
00:20:18.614 [2024-11-06 12:51:07.250650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.873 [2024-11-06 12:51:07.440876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.132 [2024-11-06 12:51:07.581756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:19.700 12:51:08 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.959 1+0 records in 00:20:19.959 1+0 records out 00:20:19.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366059 s, 11.2 MB/s 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:19.959 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:20.218 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:20.218 { 00:20:20.218 "nbd_device": "/dev/nbd0", 00:20:20.218 "bdev_name": "raid5f" 00:20:20.218 } 00:20:20.218 ]' 00:20:20.218 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:20.218 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:20.218 { 00:20:20.218 "nbd_device": "/dev/nbd0", 00:20:20.218 "bdev_name": "raid5f" 00:20:20.218 } 00:20:20.218 ]' 00:20:20.218 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:20.476 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:20.476 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:20.476 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:20.477 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:20.477 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:20.477 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.477 12:51:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:20.735 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:20.994 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:21.252 /dev/nbd0 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:21.252 12:51:09 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:21.252 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:21.511 1+0 records in 00:20:21.511 1+0 records out 00:20:21.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471007 s, 8.7 MB/s 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:21.511 12:51:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:21.769 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:21.769 { 00:20:21.769 "nbd_device": "/dev/nbd0", 00:20:21.769 "bdev_name": "raid5f" 00:20:21.769 } 00:20:21.769 ]' 00:20:21.769 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:21.769 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:21.769 { 00:20:21.769 "nbd_device": "/dev/nbd0", 00:20:21.769 "bdev_name": "raid5f" 00:20:21.769 } 00:20:21.769 ]' 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:21.770 256+0 records in 00:20:21.770 256+0 records out 00:20:21.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00963142 s, 109 MB/s 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:21.770 256+0 records in 00:20:21.770 256+0 records out 00:20:21.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0394153 s, 26.6 MB/s 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.770 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.028 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
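The `nbd_dd_data_verify` steps traced above write 256 blocks of random data through the device and then byte-compare them back with `cmp -b -n 1M`. The same write-then-verify pattern can be sketched self-contained against plain temp files (no nbd device or SPDK target needed; the real helper targets `/dev/nbd0` with `oflag=direct`):

```shell
#!/bin/sh
# Write-then-verify sketch mirroring nbd_dd_data_verify: fill a source file
# with 1 MiB of random data, copy it to the "device" (here just a second
# temp file standing in for /dev/nbd0), then compare the first 1M with cmp.
src=$(mktemp)
dev=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null
dd if="$src" of="$dev" bs=4096 count=256 2>/dev/null  # real test adds oflag=direct
verified=1
cmp -b -n 1M "$src" "$dev" && verified=0   # -n accepts a 1M suffix, as in the log
rm -f "$src" "$dev"
echo "verified=$verified"
```

A non-zero `verified` here corresponds to the test failing the `cmp` step and bailing out before `nbd_stop_disks`.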
00:20:22.597 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:22.597 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:22.597 12:51:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:22.597 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:22.855 malloc_lvol_verify 00:20:22.855 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:23.114 bccc217f-12bc-459d-abb2-90ab13151329 00:20:23.114 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:23.372 7822619d-34a7-4334-9074-c3a1f277ef5c 00:20:23.372 12:51:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:23.631 /dev/nbd0 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:23.631 mke2fs 1.47.0 (5-Feb-2023) 00:20:23.631 Discarding device blocks: 0/4096 done 00:20:23.631 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:23.631 00:20:23.631 Allocating group tables: 0/1 done 00:20:23.631 Writing inode tables: 0/1 done 00:20:23.631 Creating journal (1024 blocks): done 00:20:23.631 Writing superblocks and filesystem accounting information: 0/1 done 00:20:23.631 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.631 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90751 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90751 ']' 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90751 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90751 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90751' 00:20:23.890 killing process with pid 90751 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90751 00:20:23.890 12:51:12 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90751 00:20:25.793 12:51:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:25.793 00:20:25.793 real 0m6.823s 00:20:25.793 user 0m9.843s 00:20:25.793 sys 0m1.480s 00:20:25.793 12:51:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:25.793 12:51:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:25.793 ************************************ 00:20:25.793 END TEST bdev_nbd 00:20:25.793 ************************************ 00:20:25.793 12:51:13 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:25.793 12:51:13 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:25.793 12:51:13 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:25.793 12:51:13 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:25.793 12:51:13 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:25.793 12:51:13 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.793 12:51:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:25.793 ************************************ 00:20:25.793 START TEST bdev_fio 00:20:25.793 ************************************ 00:20:25.793 12:51:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:20:25.793 12:51:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:25.793 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:25.793 12:51:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:25.793 12:51:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:25.793 12:51:14 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* 
]] 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:25.793 ************************************ 00:20:25.793 START TEST bdev_fio_rw_verify 00:20:25.793 ************************************ 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.793 12:51:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:25.793 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:25.794 fio-3.35 00:20:25.794 Starting 1 thread 00:20:38.033 00:20:38.033 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90961: Wed Nov 6 12:51:25 2024 00:20:38.033 read: IOPS=8526, BW=33.3MiB/s (34.9MB/s)(333MiB/10001msec) 00:20:38.033 slat (usec): min=21, max=128, avg=29.16, stdev= 6.81 00:20:38.033 clat (usec): min=12, max=438, avg=185.67, stdev=71.77 00:20:38.033 lat (usec): min=43, max=498, avg=214.83, stdev=72.88 00:20:38.033 clat percentiles (usec): 00:20:38.033 | 50.000th=[ 184], 99.000th=[ 338], 99.900th=[ 375], 99.990th=[ 416], 00:20:38.033 | 99.999th=[ 441] 00:20:38.033 write: IOPS=8929, BW=34.9MiB/s (36.6MB/s)(344MiB/9873msec); 0 zone resets 00:20:38.033 slat (usec): min=10, max=242, avg=23.26, stdev= 7.04 00:20:38.033 clat (usec): min=78, max=1272, avg=431.89, stdev=62.21 00:20:38.033 lat (usec): min=98, max=1514, avg=455.16, stdev=64.01 00:20:38.033 clat percentiles (usec): 00:20:38.033 | 50.000th=[ 433], 99.000th=[ 578], 99.900th=[ 660], 99.990th=[ 947], 00:20:38.033 | 99.999th=[ 1270] 00:20:38.033 bw ( KiB/s): min=33672, max=37888, per=98.95%, avg=35343.16, stdev=1532.54, samples=19 00:20:38.033 iops : min= 8418, max= 9472, avg=8835.79, stdev=383.14, samples=19 00:20:38.033 lat (usec) : 20=0.01%, 50=0.01%, 100=7.22%, 
250=31.03%, 500=55.03% 00:20:38.033 lat (usec) : 750=6.70%, 1000=0.01% 00:20:38.033 lat (msec) : 2=0.01% 00:20:38.033 cpu : usr=98.53%, sys=0.64%, ctx=19, majf=0, minf=7390 00:20:38.033 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:38.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.033 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:38.033 issued rwts: total=85270,88157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:38.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:38.033 00:20:38.033 Run status group 0 (all jobs): 00:20:38.033 READ: bw=33.3MiB/s (34.9MB/s), 33.3MiB/s-33.3MiB/s (34.9MB/s-34.9MB/s), io=333MiB (349MB), run=10001-10001msec 00:20:38.033 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=344MiB (361MB), run=9873-9873msec 00:20:38.599 ----------------------------------------------------- 00:20:38.599 Suppressions used: 00:20:38.599 count bytes template 00:20:38.599 1 7 /usr/src/fio/parse.c 00:20:38.599 254 24384 /usr/src/fio/iolog.c 00:20:38.599 1 8 libtcmalloc_minimal.so 00:20:38.599 1 904 libcrypto.so 00:20:38.599 ----------------------------------------------------- 00:20:38.599 00:20:38.599 00:20:38.599 real 0m12.968s 00:20:38.599 user 0m13.270s 00:20:38.599 sys 0m0.840s 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:38.599 ************************************ 00:20:38.599 END TEST bdev_fio_rw_verify 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:38.599 ************************************ 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:20:38.599 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5dcd8c06-6a6c-42eb-9d26-2f8fcb67e706"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5dcd8c06-6a6c-42eb-9d26-2f8fcb67e706",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5dcd8c06-6a6c-42eb-9d26-2f8fcb67e706",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "190b6dbd-2a92-4e74-858b-c9fbf0fbe6ea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "18bdb342-db33-4519-9424-3d216556d8c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fd5544e7-4b68-4963-bcc4-6eb9caafb84f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:38.600 /home/vagrant/spdk_repo/spdk 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
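For reference, the `bdev.fio` file that `fio_config_gen` assembled earlier in this test is only partially visible in the trace: the log echoes `serialize_overlap=1` (added because fio 3.35 was detected with the AIO bdev type) and the per-bdev section `[job_raid5f]` / `filename=raid5f`, while `--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10` are passed on the command line instead. A rough reconstruction of the appended portion (not the literal file; the global verify-workload section is not shown in the log):

```ini
; Lines actually echoed into the config by the trace above:
serialize_overlap=1   ; emitted by autotest_common.sh@1327 for fio >= 3 with AIO

[job_raid5f]
filename=raid5f       ; emitted by blockdev.sh@341-342 for each bdev
```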
00:20:38.600 00:20:38.600 real 0m13.192s 00:20:38.600 user 0m13.377s 00:20:38.600 sys 0m0.936s 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:38.600 12:51:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:38.600 ************************************ 00:20:38.600 END TEST bdev_fio 00:20:38.600 ************************************ 00:20:38.600 12:51:27 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:38.600 12:51:27 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:38.600 12:51:27 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:38.600 12:51:27 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.600 12:51:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.600 ************************************ 00:20:38.600 START TEST bdev_verify 00:20:38.600 ************************************ 00:20:38.600 12:51:27 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:38.859 [2024-11-06 12:51:27.356403] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 
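The bdevperf invocation that follows passes `-m 0x3`, a core mask whose set bits select the reactor cores; the two set bits are why the log then shows "Total cores available: 2" and reactors starting on cores 0 and 1. A small popcount sketch of that relationship (illustrative only; SPDK's actual mask parsing lives in the EAL):

```shell
#!/bin/sh
# Count the set bits of the -m core mask to predict how many reactors start.
# 0x3 = binary 11 -> cores 0 and 1, matching the two reactor_run notices below.
mask=0x3
val=$((mask))
count=0
while [ "$val" -gt 0 ]; do
    count=$((count + (val & 1)))
    val=$((val >> 1))
done
echo "reactors=$count"
```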
00:20:38.859 [2024-11-06 12:51:27.356589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91122 ] 00:20:39.118 [2024-11-06 12:51:27.543661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:39.118 [2024-11-06 12:51:27.685963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.118 [2024-11-06 12:51:27.685965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.692 Running I/O for 5 seconds... 00:20:42.019 11522.00 IOPS, 45.01 MiB/s [2024-11-06T12:51:31.612Z] 13162.50 IOPS, 51.42 MiB/s [2024-11-06T12:51:32.549Z] 12991.00 IOPS, 50.75 MiB/s [2024-11-06T12:51:33.485Z] 12711.50 IOPS, 49.65 MiB/s [2024-11-06T12:51:33.485Z] 12495.40 IOPS, 48.81 MiB/s 00:20:44.828 Latency(us) 00:20:44.828 [2024-11-06T12:51:33.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.828 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:44.828 Verification LBA range: start 0x0 length 0x2000 00:20:44.828 raid5f : 5.02 6302.63 24.62 0.00 0.00 30594.70 281.13 26929.34 00:20:44.828 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:44.828 Verification LBA range: start 0x2000 length 0x2000 00:20:44.828 raid5f : 5.02 6192.29 24.19 0.00 0.00 31139.50 446.84 28001.75 00:20:44.828 [2024-11-06T12:51:33.485Z] =================================================================================================================== 00:20:44.828 [2024-11-06T12:51:33.485Z] Total : 12494.93 48.81 0.00 0.00 30864.65 281.13 28001.75 00:20:46.205 00:20:46.205 real 0m7.413s 00:20:46.205 user 0m13.498s 00:20:46.205 sys 0m0.374s 00:20:46.205 12:51:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:46.205 
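The MiB/s column in the bdevperf summary above is just IOPS scaled by the I/O size (`-o 4096`). Checking the Total row with that formula reproduces the reported figure:

```shell
#!/bin/sh
# throughput(MiB/s) = IOPS * io_size / 2^20, with io_size = 4096 from -o 4096.
# Applied to the Total row of the verify run: 12494.93 IOPS.
iops=12494.93
mibs=$(awk -v iops="$iops" 'BEGIN { printf "%.2f", iops * 4096 / 1048576 }')
echo "$mibs MiB/s"   # 48.81 MiB/s, matching the table
```

The same relation holds for the big-I/O run below, where the 65536-byte I/O size turns 669.26 IOPS into 41.83 MiB/s.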
************************************ 00:20:46.205 END TEST bdev_verify 00:20:46.205 ************************************ 00:20:46.205 12:51:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:46.205 12:51:34 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:46.205 12:51:34 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:46.205 12:51:34 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:46.205 12:51:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.205 ************************************ 00:20:46.205 START TEST bdev_verify_big_io 00:20:46.205 ************************************ 00:20:46.205 12:51:34 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:46.205 [2024-11-06 12:51:34.812894] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization... 00:20:46.205 [2024-11-06 12:51:34.813072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91222 ] 00:20:46.464 [2024-11-06 12:51:34.988967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.723 [2024-11-06 12:51:35.140887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.723 [2024-11-06 12:51:35.140887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.290 Running I/O for 5 seconds... 
00:20:49.601 506.00 IOPS, 31.62 MiB/s [2024-11-06T12:51:39.193Z] 600.50 IOPS, 37.53 MiB/s [2024-11-06T12:51:40.129Z] 612.67 IOPS, 38.29 MiB/s [2024-11-06T12:51:41.071Z] 634.50 IOPS, 39.66 MiB/s [2024-11-06T12:51:41.071Z] 660.00 IOPS, 41.25 MiB/s
00:20:52.414 Latency(us)
00:20:52.414 [2024-11-06T12:51:41.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:52.414 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:20:52.414 Verification LBA range: start 0x0 length 0x200
00:20:52.414 raid5f : 5.32 334.23 20.89 0.00 0.00 9526379.57 229.93 411804.39
00:20:52.414 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:20:52.414 Verification LBA range: start 0x200 length 0x200
00:20:52.414 raid5f : 5.31 335.04 20.94 0.00 0.00 9451887.90 310.92 409897.89
00:20:52.414 [2024-11-06T12:51:41.071Z] ===================================================================================================================
00:20:52.414 [2024-11-06T12:51:41.071Z] Total : 669.26 41.83 0.00 0.00 9489133.73 229.93 411804.39
00:20:53.823
00:20:53.823 real 0m7.751s
00:20:53.823 user 0m14.192s
00:20:53.823 sys 0m0.366s
00:20:53.823 12:51:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:53.823 12:51:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:20:53.823 ************************************
00:20:53.823 END TEST bdev_verify_big_io
00:20:53.823 ************************************
00:20:54.081 12:51:42 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:54.081 12:51:42 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:20:54.081 12:51:42 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:54.081 12:51:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:54.081 ************************************
00:20:54.081 START TEST bdev_write_zeroes
00:20:54.081 ************************************
00:20:54.081 12:51:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:54.339 [2024-11-06 12:51:42.610347] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization...
00:20:54.339 [2024-11-06 12:51:42.610545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91317 ]
00:20:54.339 [2024-11-06 12:51:42.788220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:54.339 [2024-11-06 12:51:42.929731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:54.906 Running I/O for 1 seconds...
00:20:56.281 20919.00 IOPS, 81.71 MiB/s
00:20:56.281 Latency(us)
00:20:56.281 [2024-11-06T12:51:44.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.281 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:56.281 raid5f : 1.01 20894.10 81.62 0.00 0.00 6102.86 2129.92 8579.26
00:20:56.281 [2024-11-06T12:51:44.938Z] ===================================================================================================================
00:20:56.281 [2024-11-06T12:51:44.938Z] Total : 20894.10 81.62 0.00 0.00 6102.86 2129.92 8579.26
00:20:57.215
00:20:57.215 real 0m3.295s
00:20:57.215 user 0m2.804s
00:20:57.215 sys 0m0.359s
00:20:57.215 12:51:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:57.215 ************************************
00:20:57.215 END TEST bdev_write_zeroes
00:20:57.215 ************************************
00:20:57.215 12:51:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:20:57.215 12:51:45 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:57.215 12:51:45 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:20:57.215 12:51:45 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:57.215 12:51:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:57.215 ************************************
00:20:57.215 START TEST bdev_json_nonenclosed
00:20:57.215 ************************************
00:20:57.215 12:51:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:57.478 [2024-11-06 12:51:45.972248] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization...
00:20:57.478 [2024-11-06 12:51:45.972424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91369 ]
00:20:57.742 [2024-11-06 12:51:46.156220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:57.742 [2024-11-06 12:51:46.297845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:57.742 [2024-11-06 12:51:46.298001] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:20:57.742 [2024-11-06 12:51:46.298044] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:57.742 [2024-11-06 12:51:46.298060] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:58.001
00:20:58.001 real 0m0.714s
00:20:58.001 user 0m0.438s
00:20:58.001 sys 0m0.169s
00:20:58.001 ************************************
00:20:58.001 END TEST bdev_json_nonenclosed
00:20:58.001 12:51:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:58.001 12:51:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:20:58.001 ************************************
00:20:58.001 12:51:46 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:58.001 12:51:46 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:20:58.001 12:51:46 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:58.001 12:51:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
************************************
00:20:58.001 START TEST bdev_json_nonarray
00:20:58.001 ************************************
00:20:58.001 12:51:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:58.259 [2024-11-06 12:51:46.739373] Starting SPDK v25.01-pre git sha1 88726e83b / DPDK 24.03.0 initialization...
00:20:58.259 [2024-11-06 12:51:46.739586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91401 ]
00:20:58.518 [2024-11-06 12:51:46.924750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:58.518 [2024-11-06 12:51:47.067790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:58.518 [2024-11-06 12:51:47.067998] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:20:58.518 [2024-11-06 12:51:47.068030] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:58.518 [2024-11-06 12:51:47.068058] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:58.776
00:20:58.776 real 0m0.714s
00:20:58.776 user 0m0.445s
00:20:58.776 sys 0m0.164s
00:20:58.777 12:51:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:58.777 ************************************
00:20:58.777 END TEST bdev_json_nonarray
00:20:58.777 ************************************
00:20:58.777 12:51:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:20:58.777 12:51:47 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:20:58.777
00:20:58.777 real 0m50.344s
00:20:58.777 user 1m8.461s
00:20:58.777 sys 0m5.837s
00:20:58.777 12:51:47 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:58.777 ************************************
00:20:58.777 END TEST blockdev_raid5f
************************************
00:20:58.777 12:51:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:59.036 12:51:47 -- spdk/autotest.sh@194 -- # uname -s
00:20:59.036 12:51:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@256 -- # timing_exit lib
00:20:59.036 12:51:47 -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:59.036 12:51:47 -- common/autotest_common.sh@10 -- # set +x
00:20:59.036 12:51:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:59.036 12:51:47 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:20:59.036 12:51:47 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:20:59.036 12:51:47 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:20:59.036 12:51:47 -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:59.036 12:51:47 -- common/autotest_common.sh@10 -- # set +x
00:20:59.036 12:51:47 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:20:59.036 12:51:47 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:20:59.036 12:51:47 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:20:59.036 12:51:47 -- common/autotest_common.sh@10 -- # set +x
00:21:00.938 INFO: APP EXITING
00:21:00.938 INFO: killing all VMs
00:21:00.938 INFO: killing vhost app
00:21:00.938 INFO: EXIT DONE
00:21:00.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:00.938 Waiting for block devices as requested
00:21:01.203 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:21:01.203 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:21:02.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:02.141 Cleaning
00:21:02.141 Removing: /var/run/dpdk/spdk0/config
00:21:02.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:21:02.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:21:02.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:21:02.142 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:21:02.142 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:21:02.142 Removing: /var/run/dpdk/spdk0/hugepage_info
00:21:02.142 Removing: /dev/shm/spdk_tgt_trace.pid56845
00:21:02.142 Removing: /var/run/dpdk/spdk0
00:21:02.142 Removing: /var/run/dpdk/spdk_pid56616
00:21:02.142 Removing: /var/run/dpdk/spdk_pid56845
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57080
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57184
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57240
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57368
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57386
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57596
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57713
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57820
00:21:02.142 Removing: /var/run/dpdk/spdk_pid57942
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58045
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58084
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58126
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58197
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58308
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58783
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58860
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58934
00:21:02.142 Removing: /var/run/dpdk/spdk_pid58950
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59107
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59123
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59271
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59291
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59362
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59380
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59444
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59462
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59663
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59699
00:21:02.142 Removing: /var/run/dpdk/spdk_pid59788
00:21:02.142 Removing: /var/run/dpdk/spdk_pid61160
00:21:02.142 Removing: /var/run/dpdk/spdk_pid61377
00:21:02.142 Removing: /var/run/dpdk/spdk_pid61527
00:21:02.142 Removing: /var/run/dpdk/spdk_pid62177
00:21:02.142 Removing: /var/run/dpdk/spdk_pid62394
00:21:02.142 Removing: /var/run/dpdk/spdk_pid62540
00:21:02.142 Removing: /var/run/dpdk/spdk_pid63193
00:21:02.142 Removing: /var/run/dpdk/spdk_pid63524
00:21:02.142 Removing: /var/run/dpdk/spdk_pid63670
00:21:02.142 Removing: /var/run/dpdk/spdk_pid65090
00:21:02.142 Removing: /var/run/dpdk/spdk_pid65349
00:21:02.142 Removing: /var/run/dpdk/spdk_pid65494
00:21:02.142 Removing: /var/run/dpdk/spdk_pid66907
00:21:02.142 Removing: /var/run/dpdk/spdk_pid67166
00:21:02.142 Removing: /var/run/dpdk/spdk_pid67317
00:21:02.142 Removing: /var/run/dpdk/spdk_pid68732
00:21:02.142 Removing: /var/run/dpdk/spdk_pid69183
00:21:02.142 Removing: /var/run/dpdk/spdk_pid69329
00:21:02.142 Removing: /var/run/dpdk/spdk_pid70857
00:21:02.142 Removing: /var/run/dpdk/spdk_pid71123
00:21:02.142 Removing: /var/run/dpdk/spdk_pid71274
00:21:02.142 Removing: /var/run/dpdk/spdk_pid72786
00:21:02.142 Removing: /var/run/dpdk/spdk_pid73052
00:21:02.142 Removing: /var/run/dpdk/spdk_pid73198
00:21:02.142 Removing: /var/run/dpdk/spdk_pid74708
00:21:02.142 Removing: /var/run/dpdk/spdk_pid75208
00:21:02.142 Removing: /var/run/dpdk/spdk_pid75354
00:21:02.142 Removing: /var/run/dpdk/spdk_pid75498
00:21:02.142 Removing: /var/run/dpdk/spdk_pid75949
00:21:02.142 Removing: /var/run/dpdk/spdk_pid76718
00:21:02.142 Removing: /var/run/dpdk/spdk_pid77100
00:21:02.142 Removing: /var/run/dpdk/spdk_pid77811
00:21:02.142 Removing: /var/run/dpdk/spdk_pid78301
00:21:02.142 Removing: /var/run/dpdk/spdk_pid79142
00:21:02.142 Removing: /var/run/dpdk/spdk_pid79576
00:21:02.142 Removing: /var/run/dpdk/spdk_pid81579
00:21:02.142 Removing: /var/run/dpdk/spdk_pid82027
00:21:02.142 Removing: /var/run/dpdk/spdk_pid82476
00:21:02.142 Removing: /var/run/dpdk/spdk_pid84614
00:21:02.142 Removing: /var/run/dpdk/spdk_pid85106
00:21:02.142 Removing: /var/run/dpdk/spdk_pid85616
00:21:02.142 Removing: /var/run/dpdk/spdk_pid86702
00:21:02.401 Removing: /var/run/dpdk/spdk_pid87029
00:21:02.401 Removing: /var/run/dpdk/spdk_pid87986
00:21:02.401 Removing: /var/run/dpdk/spdk_pid88320
00:21:02.401 Removing: /var/run/dpdk/spdk_pid89275
00:21:02.401 Removing: /var/run/dpdk/spdk_pid89609
00:21:02.401 Removing: /var/run/dpdk/spdk_pid90296
00:21:02.401 Removing: /var/run/dpdk/spdk_pid90576
00:21:02.401 Removing: /var/run/dpdk/spdk_pid90638
00:21:02.401 Removing: /var/run/dpdk/spdk_pid90686
00:21:02.401 Removing: /var/run/dpdk/spdk_pid90943
00:21:02.401 Removing: /var/run/dpdk/spdk_pid91122
00:21:02.401 Removing: /var/run/dpdk/spdk_pid91222
00:21:02.401 Removing: /var/run/dpdk/spdk_pid91317
00:21:02.401 Removing: /var/run/dpdk/spdk_pid91369
00:21:02.401 Removing: /var/run/dpdk/spdk_pid91401
00:21:02.401 Clean
00:21:02.401 12:51:50 -- common/autotest_common.sh@1451 -- # return 0
00:21:02.401 12:51:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:21:02.401 12:51:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:02.401 12:51:50 -- common/autotest_common.sh@10 -- # set +x
00:21:02.401 12:51:50 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:21:02.401 12:51:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:02.401 12:51:50 -- common/autotest_common.sh@10 -- # set +x
00:21:02.401 12:51:50 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:02.401 12:51:50 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:02.401 12:51:50 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:02.401 12:51:50 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:21:02.401 12:51:50 -- spdk/autotest.sh@394 -- # hostname
00:21:02.401 12:51:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:02.659 geninfo: WARNING: invalid characters removed from testname!
00:21:29.204 12:52:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:30.139 12:52:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:32.710 12:52:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:36.050 12:52:24 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:38.581 12:52:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:41.885 12:52:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:44.416 12:52:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:44.416 12:52:32 -- spdk/autorun.sh@1 -- $ timing_finish
00:21:44.416 12:52:32 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:21:44.416 12:52:32 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:44.416 12:52:32 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:21:44.416 12:52:32 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:44.416 + [[ -n 5213 ]]
00:21:44.416 + sudo kill 5213
00:21:44.424 [Pipeline] }
00:21:44.440 [Pipeline] // timeout
00:21:44.445 [Pipeline] }
00:21:44.460 [Pipeline] // stage
00:21:44.465 [Pipeline] }
00:21:44.479 [Pipeline] // catchError
00:21:44.489 [Pipeline] stage
00:21:44.491 [Pipeline] { (Stop VM)
00:21:44.503 [Pipeline] sh
00:21:44.782 + vagrant halt
00:21:48.069 ==> default: Halting domain...
00:21:51.360 [Pipeline] sh
00:21:51.636 + vagrant destroy -f
00:21:54.942 ==> default: Removing domain...
00:21:54.953 [Pipeline] sh
00:21:55.229 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:21:55.238 [Pipeline] }
00:21:55.252 [Pipeline] // stage
00:21:55.257 [Pipeline] }
00:21:55.271 [Pipeline] // dir
00:21:55.276 [Pipeline] }
00:21:55.290 [Pipeline] // wrap
00:21:55.295 [Pipeline] }
00:21:55.309 [Pipeline] // catchError
00:21:55.317 [Pipeline] stage
00:21:55.319 [Pipeline] { (Epilogue)
00:21:55.333 [Pipeline] sh
00:21:55.611 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:00.938 [Pipeline] catchError
00:22:00.940 [Pipeline] {
00:22:00.952 [Pipeline] sh
00:22:01.231 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:01.231 Artifacts sizes are good
00:22:01.238 [Pipeline] }
00:22:01.248 [Pipeline] // catchError
00:22:01.256 [Pipeline] archiveArtifacts
00:22:01.261 Archiving artifacts
00:22:01.355 [Pipeline] cleanWs
00:22:01.366 [WS-CLEANUP] Deleting project workspace...
00:22:01.366 [WS-CLEANUP] Deferred wipeout is used...
00:22:01.371 [WS-CLEANUP] done
00:22:01.373 [Pipeline] }
00:22:01.388 [Pipeline] // stage
00:22:01.392 [Pipeline] }
00:22:01.403 [Pipeline] // node
00:22:01.408 [Pipeline] End of Pipeline
00:22:01.444 Finished: SUCCESS